00:00:00.001 Started by upstream project "autotest-per-patch" build number 121012 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.024 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.025 The recommended git tool is: git 00:00:00.025 using credential 00000000-0000-0000-0000-000000000002 00:00:00.027 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.043 Fetching changes from the remote Git repository 00:00:00.045 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.073 Using shallow fetch with depth 1 00:00:00.073 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.073 > git --version # timeout=10 00:00:00.102 > git --version # 'git version 2.39.2' 00:00:00.102 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.103 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.103 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.484 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.496 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.510 Checking out Revision 6e1fadd1eee50389429f9abb33dde5face8ca717 (FETCH_HEAD) 00:00:02.510 > git config core.sparsecheckout # timeout=10 00:00:02.521 > git read-tree -mu HEAD # timeout=10 00:00:02.538 > git checkout -f 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=5 00:00:02.559 Commit message: "pool: attach build logs for failed merge builds" 00:00:02.560 > git rev-list --no-walk 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=10 00:00:02.644 [Pipeline] Start of Pipeline 00:00:02.656 [Pipeline] library 00:00:02.657 Loading library shm_lib@master 00:00:02.657 Library shm_lib@master is cached. Copying from home. 00:00:02.674 [Pipeline] node 00:00:02.681 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:02.684 [Pipeline] { 00:00:02.692 [Pipeline] catchError 00:00:02.693 [Pipeline] { 00:00:02.702 [Pipeline] wrap 00:00:02.710 [Pipeline] { 00:00:02.715 [Pipeline] stage 00:00:02.716 [Pipeline] { (Prologue) 00:00:02.886 [Pipeline] sh 00:00:03.168 + logger -p user.info -t JENKINS-CI 00:00:03.190 [Pipeline] echo 00:00:03.191 Node: GP11 00:00:03.199 [Pipeline] sh 00:00:03.495 [Pipeline] setCustomBuildProperty 00:00:03.505 [Pipeline] echo 00:00:03.506 Cleanup processes 00:00:03.509 [Pipeline] sh 00:00:03.788 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.788 1492925 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:03.800 [Pipeline] sh 00:00:04.085 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.086 ++ grep -v 'sudo pgrep' 00:00:04.086 ++ awk '{print $1}' 00:00:04.086 + sudo kill -9 00:00:04.086 + true 00:00:04.101 [Pipeline] cleanWs 00:00:04.112 [WS-CLEANUP] Deleting project workspace... 00:00:04.112 [WS-CLEANUP] Deferred wipeout is used... 
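An aside on the "Cleanup processes" step traced above: it reduces to one small piece of shell logic. A minimal consolidated sketch follows, assuming the workspace path shown in this log; the pids variable is our own naming, and the trailing "|| true" mirrors the "+ true" in the trace, which swallows the kill error when no stale process matched:

  # Sketch of the stale-process cleanup traced above (not the CI's verbatim script).
  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest   # path taken from the log
  # pgrep -af prints "PID full-command" for anything whose command line mentions
  # the workspace spdk tree; drop the pgrep invocation itself, keep the PID column.
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # SIGKILL whatever matched; tolerate failure when the list is empty.
  sudo kill -9 $pids || true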
00:00:04.118 [WS-CLEANUP] done 00:00:04.121 [Pipeline] setCustomBuildProperty 00:00:04.132 [Pipeline] sh 00:00:04.415 + sudo git config --global --replace-all safe.directory '*' 00:00:04.482 [Pipeline] nodesByLabel 00:00:04.483 Found a total of 1 nodes with the 'sorcerer' label 00:00:04.491 [Pipeline] httpRequest 00:00:04.496 HttpMethod: GET 00:00:04.496 URL: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:04.502 Sending request to url: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:04.516 Response Code: HTTP/1.1 200 OK 00:00:04.516 Success: Status code 200 is in the accepted range: 200,404 00:00:04.517 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:09.853 [Pipeline] sh 00:00:10.139 + tar --no-same-owner -xf jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:10.158 [Pipeline] httpRequest 00:00:10.163 HttpMethod: GET 00:00:10.163 URL: http://10.211.164.96/packages/spdk_166ede64d5b441bb2ee49b1f849288e1e3b552e7.tar.gz 00:00:10.165 Sending request to url: http://10.211.164.96/packages/spdk_166ede64d5b441bb2ee49b1f849288e1e3b552e7.tar.gz 00:00:10.186 Response Code: HTTP/1.1 200 OK 00:00:10.186 Success: Status code 200 is in the accepted range: 200,404 00:00:10.187 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_166ede64d5b441bb2ee49b1f849288e1e3b552e7.tar.gz 00:01:09.924 [Pipeline] sh 00:01:10.211 + tar --no-same-owner -xf spdk_166ede64d5b441bb2ee49b1f849288e1e3b552e7.tar.gz 00:01:12.758 [Pipeline] sh 00:01:13.044 + git -C spdk log --oneline -n5 00:01:13.044 166ede64d nvmf/tcp: add nvmf_qpair_set_ctrlr helper function 00:01:13.044 5c8d451f1 app/trace: emit owner descriptions 00:01:13.044 aaaef7578 trace: rename trace_event's poller_id to owner_id 00:01:13.044 98cccbebd trace: add concept of "owner" to trace files 00:01:13.044 bf2cbb6d8 trace: rename "per_lcore_history" to just "data" 00:01:13.057 [Pipeline] } 00:01:13.072 [Pipeline] // stage 00:01:13.080 [Pipeline] stage 00:01:13.082 [Pipeline] { (Prepare) 00:01:13.099 [Pipeline] writeFile 00:01:13.115 [Pipeline] sh 00:01:13.420 + logger -p user.info -t JENKINS-CI 00:01:13.436 [Pipeline] sh 00:01:13.728 + logger -p user.info -t JENKINS-CI 00:01:13.741 [Pipeline] sh 00:01:14.027 + cat autorun-spdk.conf 00:01:14.027 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.027 SPDK_TEST_NVMF=1 00:01:14.027 SPDK_TEST_NVME_CLI=1 00:01:14.027 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.027 SPDK_TEST_NVMF_NICS=e810 00:01:14.027 SPDK_TEST_VFIOUSER=1 00:01:14.027 SPDK_RUN_UBSAN=1 00:01:14.027 NET_TYPE=phy 00:01:14.036 RUN_NIGHTLY=0 00:01:14.041 [Pipeline] readFile 00:01:14.067 [Pipeline] withEnv 00:01:14.069 [Pipeline] { 00:01:14.087 [Pipeline] sh 00:01:14.398 + set -ex 00:01:14.398 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:14.398 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:14.398 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.398 ++ SPDK_TEST_NVMF=1 00:01:14.398 ++ SPDK_TEST_NVME_CLI=1 00:01:14.398 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.398 ++ SPDK_TEST_NVMF_NICS=e810 00:01:14.398 ++ SPDK_TEST_VFIOUSER=1 00:01:14.398 ++ SPDK_RUN_UBSAN=1 00:01:14.398 ++ NET_TYPE=phy 00:01:14.398 ++ RUN_NIGHTLY=0 00:01:14.398 + case $SPDK_TEST_NVMF_NICS in 00:01:14.398 + DRIVERS=ice 00:01:14.398 + [[ tcp == \r\d\m\a ]] 00:01:14.398 + [[ -n ice ]] 00:01:14.398 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:14.398 rmmod: ERROR: Module 
mlx4_ib is not currently loaded 00:01:14.398 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:14.398 rmmod: ERROR: Module irdma is not currently loaded 00:01:14.398 rmmod: ERROR: Module i40iw is not currently loaded 00:01:14.398 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:14.398 + true 00:01:14.398 + for D in $DRIVERS 00:01:14.398 + sudo modprobe ice 00:01:14.398 + exit 0 00:01:14.409 [Pipeline] } 00:01:14.430 [Pipeline] // withEnv 00:01:14.434 [Pipeline] } 00:01:14.446 [Pipeline] // stage 00:01:14.456 [Pipeline] catchError 00:01:14.457 [Pipeline] { 00:01:14.471 [Pipeline] timeout 00:01:14.471 Timeout set to expire in 40 min 00:01:14.472 [Pipeline] { 00:01:14.483 [Pipeline] stage 00:01:14.484 [Pipeline] { (Tests) 00:01:14.495 [Pipeline] sh 00:01:14.778 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:14.778 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:14.778 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:14.778 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:14.778 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:14.778 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:14.778 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:14.778 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:14.778 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:14.778 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:14.778 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:14.778 + source /etc/os-release 00:01:14.778 ++ NAME='Fedora Linux' 00:01:14.778 ++ VERSION='38 (Cloud Edition)' 00:01:14.778 ++ ID=fedora 00:01:14.778 ++ VERSION_ID=38 00:01:14.778 ++ VERSION_CODENAME= 00:01:14.778 ++ PLATFORM_ID=platform:f38 00:01:14.778 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:14.778 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:14.778 ++ LOGO=fedora-logo-icon 00:01:14.778 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:14.778 ++ HOME_URL=https://fedoraproject.org/ 00:01:14.778 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:14.778 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:14.778 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:14.778 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:14.778 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:14.778 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:14.778 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:14.778 ++ SUPPORT_END=2024-05-14 00:01:14.778 ++ VARIANT='Cloud Edition' 00:01:14.778 ++ VARIANT_ID=cloud 00:01:14.778 + uname -a 00:01:14.778 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:14.778 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:15.721 Hugepages 00:01:15.721 node hugesize free / total 00:01:15.721 node0 1048576kB 0 / 0 00:01:15.721 node0 2048kB 0 / 0 00:01:15.721 node1 1048576kB 0 / 0 00:01:15.721 node1 2048kB 0 / 0 00:01:15.721 00:01:15.721 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:15.721 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:15.721 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:15.721 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:15.721 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:15.721 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:15.721 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:15.721 I/OAT 0000:00:04.6 8086 0e26 0 
ioatdma - - 00:01:15.721 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:15.721 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:15.721 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:15.721 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:15.721 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:15.721 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:15.721 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:15.721 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:15.721 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:15.721 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:15.721 + rm -f /tmp/spdk-ld-path 00:01:15.721 + source autorun-spdk.conf 00:01:15.721 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.721 ++ SPDK_TEST_NVMF=1 00:01:15.721 ++ SPDK_TEST_NVME_CLI=1 00:01:15.721 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.721 ++ SPDK_TEST_NVMF_NICS=e810 00:01:15.721 ++ SPDK_TEST_VFIOUSER=1 00:01:15.721 ++ SPDK_RUN_UBSAN=1 00:01:15.721 ++ NET_TYPE=phy 00:01:15.721 ++ RUN_NIGHTLY=0 00:01:15.721 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:15.721 + [[ -n '' ]] 00:01:15.721 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.721 + for M in /var/spdk/build-*-manifest.txt 00:01:15.721 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:15.721 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:15.721 + for M in /var/spdk/build-*-manifest.txt 00:01:15.721 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:15.721 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:15.721 ++ uname 00:01:15.721 + [[ Linux == \L\i\n\u\x ]] 00:01:15.721 + sudo dmesg -T 00:01:15.721 + sudo dmesg --clear 00:01:15.721 + dmesg_pid=1493592 00:01:15.721 + [[ Fedora Linux == FreeBSD ]] 00:01:15.721 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:15.721 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:15.721 + sudo dmesg -Tw 00:01:15.721 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:15.721 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:15.721 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:15.721 + [[ -x /usr/src/fio-static/fio ]] 00:01:15.721 + export FIO_BIN=/usr/src/fio-static/fio 00:01:15.721 + FIO_BIN=/usr/src/fio-static/fio 00:01:15.721 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:15.721 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:15.721 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:15.721 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:15.721 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:15.721 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:15.721 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:15.721 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:15.722 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:15.722 Test configuration: 00:01:15.722 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.722 SPDK_TEST_NVMF=1 00:01:15.722 SPDK_TEST_NVME_CLI=1 00:01:15.722 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.722 SPDK_TEST_NVMF_NICS=e810 00:01:15.722 SPDK_TEST_VFIOUSER=1 00:01:15.722 SPDK_RUN_UBSAN=1 00:01:15.722 NET_TYPE=phy 00:01:15.983 RUN_NIGHTLY=0 19:31:57 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:15.983 19:31:57 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:15.983 19:31:57 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:15.983 19:31:57 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:15.983 19:31:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.983 19:31:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.983 19:31:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.983 19:31:57 -- paths/export.sh@5 -- $ export PATH 00:01:15.983 19:31:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:15.983 19:31:57 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:15.983 19:31:57 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:15.983 19:31:57 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713979917.XXXXXX 00:01:15.983 19:31:57 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713979917.HpxPFr 00:01:15.983 19:31:57 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:15.983 19:31:57 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:15.983 19:31:57 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:15.983 19:31:57 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:15.983 19:31:57 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:15.983 19:31:57 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:15.983 19:31:57 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:01:15.983 19:31:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.983 19:31:57 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:15.983 19:31:57 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:01:15.983 19:31:57 -- pm/common@17 -- $ local monitor 00:01:15.983 19:31:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.983 19:31:57 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1493626 00:01:15.983 19:31:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.983 19:31:57 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1493628 00:01:15.983 19:31:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.983 19:31:57 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1493630 00:01:15.983 19:31:57 -- pm/common@21 -- $ date +%s 00:01:15.983 19:31:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:15.983 19:31:57 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1493632 00:01:15.983 19:31:57 -- pm/common@21 -- $ date +%s 00:01:15.983 19:31:57 -- pm/common@26 -- $ sleep 1 00:01:15.983 19:31:57 -- pm/common@21 -- $ date +%s 00:01:15.983 19:31:57 -- pm/common@21 -- $ date +%s 00:01:15.983 19:31:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713979917 00:01:15.983 19:31:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713979917 00:01:15.983 19:31:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713979917 00:01:15.983 19:31:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713979917 00:01:15.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713979917_collect-vmstat.pm.log 00:01:15.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713979917_collect-bmc-pm.bmc.pm.log 00:01:15.983 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713979917_collect-cpu-load.pm.log 00:01:15.983 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713979917_collect-cpu-temp.pm.log 00:01:16.924 19:31:58 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:01:16.924 19:31:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:16.924 19:31:58 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:16.924 19:31:58 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:16.924 19:31:58 -- spdk/autobuild.sh@16 -- $ date -u 00:01:16.924 Wed Apr 24 05:31:58 PM UTC 2024 00:01:16.924 19:31:58 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:16.924 v24.05-pre-443-g166ede64d 00:01:16.924 19:31:58 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:16.924 19:31:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:16.924 19:31:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:16.924 19:31:58 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:16.924 19:31:58 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:16.924 19:31:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.924 ************************************ 00:01:16.924 START TEST ubsan 00:01:16.924 ************************************ 00:01:16.924 19:31:58 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:01:16.924 using ubsan 00:01:16.924 00:01:16.924 real 0m0.000s 00:01:16.924 user 0m0.000s 00:01:16.924 sys 0m0.000s 00:01:16.924 19:31:58 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:16.924 19:31:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.924 ************************************ 00:01:16.924 END TEST ubsan 00:01:16.924 ************************************ 00:01:16.924 19:31:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:16.924 19:31:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:16.924 19:31:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:16.924 19:31:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:16.924 19:31:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:16.924 19:31:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:16.924 19:31:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:16.924 19:31:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:16.924 19:31:58 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:17.185 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:17.185 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:17.445 Using 'verbs' RDMA provider 00:01:28.014 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:38.004 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:38.004 Creating mk/config.mk...done. 00:01:38.004 Creating mk/cc.flags.mk...done. 00:01:38.004 Type 'make' to build. 
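Before the make output below: the configure invocation above, together with the make step the next stage runs, can be reproduced outside Jenkins roughly as follows. This is a sketch only; the flags are copied verbatim from this run's configure line, and the -j48 comes from the run_test call that follows:

  # Sketch: rebuild SPDK the way this run configured it (flags copied from the log).
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  make -j48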
00:01:38.004 19:32:18 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:38.004 19:32:18 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:38.004 19:32:18 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:38.004 19:32:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.004 ************************************ 00:01:38.004 START TEST make 00:01:38.004 ************************************ 00:01:38.004 19:32:18 -- common/autotest_common.sh@1111 -- $ make -j48 00:01:38.004 make[1]: Nothing to be done for 'all'. 00:01:39.397 The Meson build system 00:01:39.397 Version: 1.3.1 00:01:39.397 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:39.397 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:39.397 Build type: native build 00:01:39.397 Project name: libvfio-user 00:01:39.397 Project version: 0.0.1 00:01:39.397 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:39.397 C linker for the host machine: cc ld.bfd 2.39-16 00:01:39.397 Host machine cpu family: x86_64 00:01:39.397 Host machine cpu: x86_64 00:01:39.397 Run-time dependency threads found: YES 00:01:39.397 Library dl found: YES 00:01:39.397 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:39.397 Run-time dependency json-c found: YES 0.17 00:01:39.397 Run-time dependency cmocka found: YES 1.1.7 00:01:39.397 Program pytest-3 found: NO 00:01:39.397 Program flake8 found: NO 00:01:39.397 Program misspell-fixer found: NO 00:01:39.397 Program restructuredtext-lint found: NO 00:01:39.397 Program valgrind found: YES (/usr/bin/valgrind) 00:01:39.397 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:39.397 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:39.397 Compiler for C supports arguments -Wwrite-strings: YES 00:01:39.397 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:39.397 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:39.397 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:39.397 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:39.397 Build targets in project: 8 00:01:39.397 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:39.397 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:39.397 00:01:39.398 libvfio-user 0.0.1 00:01:39.398 00:01:39.398 User defined options 00:01:39.398 buildtype : debug 00:01:39.398 default_library: shared 00:01:39.398 libdir : /usr/local/lib 00:01:39.398 00:01:39.398 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:39.977 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:40.248 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:40.248 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:40.248 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:40.248 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:40.248 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:40.248 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:40.248 [7/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:40.248 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:40.248 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:40.248 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:40.248 [11/37] Compiling C object samples/null.p/null.c.o 00:01:40.248 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:40.248 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:40.248 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:40.248 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:40.508 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:40.508 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:40.508 [18/37] Compiling C object samples/server.p/server.c.o 00:01:40.508 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:40.508 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:40.508 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:40.508 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:40.508 [23/37] Compiling C object samples/client.p/client.c.o 00:01:40.508 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:40.508 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:40.508 [26/37] Linking target samples/client 00:01:40.508 [27/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:40.508 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:40.508 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:40.508 [30/37] Linking target test/unit_tests 00:01:40.768 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:40.768 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:41.028 [33/37] Linking target samples/server 00:01:41.028 [34/37] Linking target samples/lspci 00:01:41.028 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:41.028 [36/37] Linking target samples/null 00:01:41.028 [37/37] Linking target samples/gpio-pci-idio-16 00:01:41.028 INFO: autodetecting backend as ninja 00:01:41.028 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:41.028 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:41.601 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:41.601 ninja: no work to do. 00:01:46.876 The Meson build system 00:01:46.876 Version: 1.3.1 00:01:46.876 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:46.876 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:46.876 Build type: native build 00:01:46.876 Program cat found: YES (/usr/bin/cat) 00:01:46.876 Project name: DPDK 00:01:46.876 Project version: 23.11.0 00:01:46.876 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:46.876 C linker for the host machine: cc ld.bfd 2.39-16 00:01:46.876 Host machine cpu family: x86_64 00:01:46.876 Host machine cpu: x86_64 00:01:46.876 Message: ## Building in Developer Mode ## 00:01:46.876 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:46.876 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:46.876 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:46.876 Program python3 found: YES (/usr/bin/python3) 00:01:46.876 Program cat found: YES (/usr/bin/cat) 00:01:46.876 Compiler for C supports arguments -march=native: YES 00:01:46.876 Checking for size of "void *" : 8 00:01:46.876 Checking for size of "void *" : 8 (cached) 00:01:46.876 Library m found: YES 00:01:46.876 Library numa found: YES 00:01:46.876 Has header "numaif.h" : YES 00:01:46.876 Library fdt found: NO 00:01:46.876 Library execinfo found: NO 00:01:46.876 Has header "execinfo.h" : YES 00:01:46.876 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:46.876 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:46.876 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:46.876 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:46.876 Run-time dependency openssl found: YES 3.0.9 00:01:46.876 Run-time dependency libpcap found: YES 1.10.4 00:01:46.876 Has header "pcap.h" with dependency libpcap: YES 00:01:46.876 Compiler for C supports arguments -Wcast-qual: YES 00:01:46.876 Compiler for C supports arguments -Wdeprecated: YES 00:01:46.876 Compiler for C supports arguments -Wformat: YES 00:01:46.876 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:46.876 Compiler for C supports arguments -Wformat-security: NO 00:01:46.876 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:46.876 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:46.876 Compiler for C supports arguments -Wnested-externs: YES 00:01:46.876 Compiler for C supports arguments -Wold-style-definition: YES 00:01:46.876 Compiler for C supports arguments -Wpointer-arith: YES 00:01:46.876 Compiler for C supports arguments -Wsign-compare: YES 00:01:46.876 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:46.876 Compiler for C supports arguments -Wundef: YES 00:01:46.876 Compiler for C supports arguments -Wwrite-strings: YES 00:01:46.876 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:46.876 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:46.876 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:46.876 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:46.876 Program objdump found: YES (/usr/bin/objdump) 00:01:46.876 Compiler for C supports arguments -mavx512f: YES 00:01:46.876 Checking if "AVX512 checking" compiles: YES 00:01:46.876 Fetching value of define "__SSE4_2__" : 1 00:01:46.876 Fetching value of define "__AES__" : 1 00:01:46.876 Fetching value of define "__AVX__" : 1 00:01:46.876 Fetching value of define "__AVX2__" : (undefined) 00:01:46.876 Fetching value of define "__AVX512BW__" : (undefined) 00:01:46.876 Fetching value of define "__AVX512CD__" : (undefined) 00:01:46.876 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:46.876 Fetching value of define "__AVX512F__" : (undefined) 00:01:46.876 Fetching value of define "__AVX512VL__" : (undefined) 00:01:46.876 Fetching value of define "__PCLMUL__" : 1 00:01:46.876 Fetching value of define "__RDRND__" : 1 00:01:46.876 Fetching value of define "__RDSEED__" : (undefined) 00:01:46.876 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:46.876 Fetching value of define "__znver1__" : (undefined) 00:01:46.876 Fetching value of define "__znver2__" : (undefined) 00:01:46.876 Fetching value of define "__znver3__" : (undefined) 00:01:46.876 Fetching value of define "__znver4__" : (undefined) 00:01:46.876 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:46.876 Message: lib/log: Defining dependency "log" 00:01:46.876 Message: lib/kvargs: Defining dependency "kvargs" 00:01:46.876 Message: lib/telemetry: Defining dependency "telemetry" 00:01:46.876 Checking for function "getentropy" : NO 00:01:46.876 Message: lib/eal: Defining dependency "eal" 00:01:46.876 Message: lib/ring: Defining dependency "ring" 00:01:46.876 Message: lib/rcu: Defining dependency "rcu" 00:01:46.876 Message: lib/mempool: Defining dependency "mempool" 00:01:46.876 Message: lib/mbuf: Defining dependency "mbuf" 00:01:46.876 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:46.876 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:46.876 Compiler for C supports arguments -mpclmul: YES 00:01:46.876 Compiler for C supports arguments -maes: YES 00:01:46.876 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:46.876 Compiler for C supports arguments -mavx512bw: YES 00:01:46.876 Compiler for C supports arguments -mavx512dq: YES 00:01:46.876 Compiler for C supports arguments -mavx512vl: YES 00:01:46.876 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:46.876 Compiler for C supports arguments -mavx2: YES 00:01:46.876 Compiler for C supports arguments -mavx: YES 00:01:46.876 Message: lib/net: Defining dependency "net" 00:01:46.876 Message: lib/meter: Defining dependency "meter" 00:01:46.876 Message: lib/ethdev: Defining dependency "ethdev" 00:01:46.876 Message: lib/pci: Defining dependency "pci" 00:01:46.876 Message: lib/cmdline: Defining dependency "cmdline" 00:01:46.876 Message: lib/hash: Defining dependency "hash" 00:01:46.876 Message: lib/timer: Defining dependency "timer" 00:01:46.876 Message: lib/compressdev: Defining dependency "compressdev" 00:01:46.876 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:46.876 Message: lib/dmadev: Defining dependency "dmadev" 00:01:46.876 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:46.876 Message: lib/power: Defining dependency "power" 00:01:46.876 Message: lib/reorder: Defining dependency "reorder" 00:01:46.876 Message: lib/security: Defining dependency "security" 
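An aside on the 'Fetching value of define' probes in the meson output above: the same information can be pulled from the compiler directly, which is a quick way to check why, for example, __AVX2__ and __AVX512F__ report "(undefined)" on this node even though -mavx512f still compiles. A sketch using the standard gcc/clang preprocessor dump (not an SPDK or DPDK tool):

  # Dump the macros predefined under -march=native and filter the ISA bits meson
  # probed above; a missing line corresponds to "(undefined)" in the log.
  cc -march=native -dM -E - </dev/null | grep -E '__(AES|PCLMUL|RDRND|AVX2?|AVX512(F|BW|DQ|VL))__'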
00:01:46.876 Has header "linux/userfaultfd.h" : YES 00:01:46.876 Has header "linux/vduse.h" : YES 00:01:46.876 Message: lib/vhost: Defining dependency "vhost" 00:01:46.876 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:46.876 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:46.876 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:46.876 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:46.876 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:46.876 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:46.876 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:46.876 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:46.877 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:46.877 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:46.877 Program doxygen found: YES (/usr/bin/doxygen) 00:01:46.877 Configuring doxy-api-html.conf using configuration 00:01:46.877 Configuring doxy-api-man.conf using configuration 00:01:46.877 Program mandb found: YES (/usr/bin/mandb) 00:01:46.877 Program sphinx-build found: NO 00:01:46.877 Configuring rte_build_config.h using configuration 00:01:46.877 Message: 00:01:46.877 ================= 00:01:46.877 Applications Enabled 00:01:46.877 ================= 00:01:46.877 00:01:46.877 apps: 00:01:46.877 00:01:46.877 00:01:46.877 Message: 00:01:46.877 ================= 00:01:46.877 Libraries Enabled 00:01:46.877 ================= 00:01:46.877 00:01:46.877 libs: 00:01:46.877 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:46.877 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:46.877 cryptodev, dmadev, power, reorder, security, vhost, 00:01:46.877 00:01:46.877 Message: 00:01:46.877 =============== 00:01:46.877 Drivers Enabled 00:01:46.877 =============== 00:01:46.877 00:01:46.877 common: 00:01:46.877 00:01:46.877 bus: 00:01:46.877 pci, vdev, 00:01:46.877 mempool: 00:01:46.877 ring, 00:01:46.877 dma: 00:01:46.877 00:01:46.877 net: 00:01:46.877 00:01:46.877 crypto: 00:01:46.877 00:01:46.877 compress: 00:01:46.877 00:01:46.877 vdpa: 00:01:46.877 00:01:46.877 00:01:46.877 Message: 00:01:46.877 ================= 00:01:46.877 Content Skipped 00:01:46.877 ================= 00:01:46.877 00:01:46.877 apps: 00:01:46.877 dumpcap: explicitly disabled via build config 00:01:46.877 graph: explicitly disabled via build config 00:01:46.877 pdump: explicitly disabled via build config 00:01:46.877 proc-info: explicitly disabled via build config 00:01:46.877 test-acl: explicitly disabled via build config 00:01:46.877 test-bbdev: explicitly disabled via build config 00:01:46.877 test-cmdline: explicitly disabled via build config 00:01:46.877 test-compress-perf: explicitly disabled via build config 00:01:46.877 test-crypto-perf: explicitly disabled via build config 00:01:46.877 test-dma-perf: explicitly disabled via build config 00:01:46.877 test-eventdev: explicitly disabled via build config 00:01:46.877 test-fib: explicitly disabled via build config 00:01:46.877 test-flow-perf: explicitly disabled via build config 00:01:46.877 test-gpudev: explicitly disabled via build config 00:01:46.877 test-mldev: explicitly disabled via build config 00:01:46.877 test-pipeline: explicitly disabled via build config 00:01:46.877 test-pmd: explicitly disabled via build config 00:01:46.877 test-regex: explicitly disabled via build config 
00:01:46.877 test-sad: explicitly disabled via build config 00:01:46.877 test-security-perf: explicitly disabled via build config 00:01:46.877 00:01:46.877 libs: 00:01:46.877 metrics: explicitly disabled via build config 00:01:46.877 acl: explicitly disabled via build config 00:01:46.877 bbdev: explicitly disabled via build config 00:01:46.877 bitratestats: explicitly disabled via build config 00:01:46.877 bpf: explicitly disabled via build config 00:01:46.877 cfgfile: explicitly disabled via build config 00:01:46.877 distributor: explicitly disabled via build config 00:01:46.877 efd: explicitly disabled via build config 00:01:46.877 eventdev: explicitly disabled via build config 00:01:46.877 dispatcher: explicitly disabled via build config 00:01:46.877 gpudev: explicitly disabled via build config 00:01:46.877 gro: explicitly disabled via build config 00:01:46.877 gso: explicitly disabled via build config 00:01:46.877 ip_frag: explicitly disabled via build config 00:01:46.877 jobstats: explicitly disabled via build config 00:01:46.877 latencystats: explicitly disabled via build config 00:01:46.877 lpm: explicitly disabled via build config 00:01:46.877 member: explicitly disabled via build config 00:01:46.877 pcapng: explicitly disabled via build config 00:01:46.877 rawdev: explicitly disabled via build config 00:01:46.877 regexdev: explicitly disabled via build config 00:01:46.877 mldev: explicitly disabled via build config 00:01:46.877 rib: explicitly disabled via build config 00:01:46.877 sched: explicitly disabled via build config 00:01:46.877 stack: explicitly disabled via build config 00:01:46.877 ipsec: explicitly disabled via build config 00:01:46.877 pdcp: explicitly disabled via build config 00:01:46.877 fib: explicitly disabled via build config 00:01:46.877 port: explicitly disabled via build config 00:01:46.877 pdump: explicitly disabled via build config 00:01:46.877 table: explicitly disabled via build config 00:01:46.877 pipeline: explicitly disabled via build config 00:01:46.877 graph: explicitly disabled via build config 00:01:46.877 node: explicitly disabled via build config 00:01:46.877 00:01:46.877 drivers: 00:01:46.877 common/cpt: not in enabled drivers build config 00:01:46.877 common/dpaax: not in enabled drivers build config 00:01:46.877 common/iavf: not in enabled drivers build config 00:01:46.877 common/idpf: not in enabled drivers build config 00:01:46.877 common/mvep: not in enabled drivers build config 00:01:46.877 common/octeontx: not in enabled drivers build config 00:01:46.877 bus/auxiliary: not in enabled drivers build config 00:01:46.877 bus/cdx: not in enabled drivers build config 00:01:46.877 bus/dpaa: not in enabled drivers build config 00:01:46.877 bus/fslmc: not in enabled drivers build config 00:01:46.877 bus/ifpga: not in enabled drivers build config 00:01:46.877 bus/platform: not in enabled drivers build config 00:01:46.877 bus/vmbus: not in enabled drivers build config 00:01:46.877 common/cnxk: not in enabled drivers build config 00:01:46.877 common/mlx5: not in enabled drivers build config 00:01:46.877 common/nfp: not in enabled drivers build config 00:01:46.877 common/qat: not in enabled drivers build config 00:01:46.877 common/sfc_efx: not in enabled drivers build config 00:01:46.877 mempool/bucket: not in enabled drivers build config 00:01:46.877 mempool/cnxk: not in enabled drivers build config 00:01:46.877 mempool/dpaa: not in enabled drivers build config 00:01:46.877 mempool/dpaa2: not in enabled drivers build config 00:01:46.877 
mempool/octeontx: not in enabled drivers build config 00:01:46.877 mempool/stack: not in enabled drivers build config 00:01:46.877 dma/cnxk: not in enabled drivers build config 00:01:46.877 dma/dpaa: not in enabled drivers build config 00:01:46.877 dma/dpaa2: not in enabled drivers build config 00:01:46.877 dma/hisilicon: not in enabled drivers build config 00:01:46.877 dma/idxd: not in enabled drivers build config 00:01:46.877 dma/ioat: not in enabled drivers build config 00:01:46.877 dma/skeleton: not in enabled drivers build config 00:01:46.877 net/af_packet: not in enabled drivers build config 00:01:46.877 net/af_xdp: not in enabled drivers build config 00:01:46.877 net/ark: not in enabled drivers build config 00:01:46.877 net/atlantic: not in enabled drivers build config 00:01:46.877 net/avp: not in enabled drivers build config 00:01:46.877 net/axgbe: not in enabled drivers build config 00:01:46.877 net/bnx2x: not in enabled drivers build config 00:01:46.877 net/bnxt: not in enabled drivers build config 00:01:46.877 net/bonding: not in enabled drivers build config 00:01:46.877 net/cnxk: not in enabled drivers build config 00:01:46.877 net/cpfl: not in enabled drivers build config 00:01:46.877 net/cxgbe: not in enabled drivers build config 00:01:46.877 net/dpaa: not in enabled drivers build config 00:01:46.877 net/dpaa2: not in enabled drivers build config 00:01:46.877 net/e1000: not in enabled drivers build config 00:01:46.877 net/ena: not in enabled drivers build config 00:01:46.877 net/enetc: not in enabled drivers build config 00:01:46.877 net/enetfec: not in enabled drivers build config 00:01:46.877 net/enic: not in enabled drivers build config 00:01:46.877 net/failsafe: not in enabled drivers build config 00:01:46.877 net/fm10k: not in enabled drivers build config 00:01:46.877 net/gve: not in enabled drivers build config 00:01:46.877 net/hinic: not in enabled drivers build config 00:01:46.877 net/hns3: not in enabled drivers build config 00:01:46.877 net/i40e: not in enabled drivers build config 00:01:46.877 net/iavf: not in enabled drivers build config 00:01:46.877 net/ice: not in enabled drivers build config 00:01:46.877 net/idpf: not in enabled drivers build config 00:01:46.877 net/igc: not in enabled drivers build config 00:01:46.877 net/ionic: not in enabled drivers build config 00:01:46.877 net/ipn3ke: not in enabled drivers build config 00:01:46.877 net/ixgbe: not in enabled drivers build config 00:01:46.877 net/mana: not in enabled drivers build config 00:01:46.877 net/memif: not in enabled drivers build config 00:01:46.877 net/mlx4: not in enabled drivers build config 00:01:46.877 net/mlx5: not in enabled drivers build config 00:01:46.877 net/mvneta: not in enabled drivers build config 00:01:46.877 net/mvpp2: not in enabled drivers build config 00:01:46.877 net/netvsc: not in enabled drivers build config 00:01:46.877 net/nfb: not in enabled drivers build config 00:01:46.877 net/nfp: not in enabled drivers build config 00:01:46.877 net/ngbe: not in enabled drivers build config 00:01:46.877 net/null: not in enabled drivers build config 00:01:46.877 net/octeontx: not in enabled drivers build config 00:01:46.877 net/octeon_ep: not in enabled drivers build config 00:01:46.877 net/pcap: not in enabled drivers build config 00:01:46.877 net/pfe: not in enabled drivers build config 00:01:46.877 net/qede: not in enabled drivers build config 00:01:46.877 net/ring: not in enabled drivers build config 00:01:46.877 net/sfc: not in enabled drivers build config 00:01:46.877 net/softnic: 
not in enabled drivers build config 00:01:46.877 net/tap: not in enabled drivers build config 00:01:46.877 net/thunderx: not in enabled drivers build config 00:01:46.877 net/txgbe: not in enabled drivers build config 00:01:46.877 net/vdev_netvsc: not in enabled drivers build config 00:01:46.877 net/vhost: not in enabled drivers build config 00:01:46.877 net/virtio: not in enabled drivers build config 00:01:46.877 net/vmxnet3: not in enabled drivers build config 00:01:46.877 raw/*: missing internal dependency, "rawdev" 00:01:46.877 crypto/armv8: not in enabled drivers build config 00:01:46.877 crypto/bcmfs: not in enabled drivers build config 00:01:46.877 crypto/caam_jr: not in enabled drivers build config 00:01:46.877 crypto/ccp: not in enabled drivers build config 00:01:46.877 crypto/cnxk: not in enabled drivers build config 00:01:46.877 crypto/dpaa_sec: not in enabled drivers build config 00:01:46.878 crypto/dpaa2_sec: not in enabled drivers build config 00:01:46.878 crypto/ipsec_mb: not in enabled drivers build config 00:01:46.878 crypto/mlx5: not in enabled drivers build config 00:01:46.878 crypto/mvsam: not in enabled drivers build config 00:01:46.878 crypto/nitrox: not in enabled drivers build config 00:01:46.878 crypto/null: not in enabled drivers build config 00:01:46.878 crypto/octeontx: not in enabled drivers build config 00:01:46.878 crypto/openssl: not in enabled drivers build config 00:01:46.878 crypto/scheduler: not in enabled drivers build config 00:01:46.878 crypto/uadk: not in enabled drivers build config 00:01:46.878 crypto/virtio: not in enabled drivers build config 00:01:46.878 compress/isal: not in enabled drivers build config 00:01:46.878 compress/mlx5: not in enabled drivers build config 00:01:46.878 compress/octeontx: not in enabled drivers build config 00:01:46.878 compress/zlib: not in enabled drivers build config 00:01:46.878 regex/*: missing internal dependency, "regexdev" 00:01:46.878 ml/*: missing internal dependency, "mldev" 00:01:46.878 vdpa/ifc: not in enabled drivers build config 00:01:46.878 vdpa/mlx5: not in enabled drivers build config 00:01:46.878 vdpa/nfp: not in enabled drivers build config 00:01:46.878 vdpa/sfc: not in enabled drivers build config 00:01:46.878 event/*: missing internal dependency, "eventdev" 00:01:46.878 baseband/*: missing internal dependency, "bbdev" 00:01:46.878 gpu/*: missing internal dependency, "gpudev" 00:01:46.878 00:01:46.878 00:01:46.878 Build targets in project: 85 00:01:46.878 00:01:46.878 DPDK 23.11.0 00:01:46.878 00:01:46.878 User defined options 00:01:46.878 buildtype : debug 00:01:46.878 default_library : shared 00:01:46.878 libdir : lib 00:01:46.878 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:46.878 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:46.878 c_link_args : 00:01:46.878 cpu_instruction_set: native 00:01:46.878 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:46.878 disable_libs : pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:46.878 enable_docs : false 00:01:46.878 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 
00:01:46.878 enable_kmods : false 00:01:46.878 tests : false 00:01:46.878 00:01:46.878 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.878 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:46.878 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:47.139 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:47.139 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:47.139 [4/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:47.139 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:47.139 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:47.139 [7/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:47.139 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:47.139 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:47.139 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:47.139 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:47.140 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:47.140 [13/265] Linking static target lib/librte_kvargs.a 00:01:47.140 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:47.140 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:47.140 [16/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:47.140 [17/265] Linking static target lib/librte_log.a 00:01:47.140 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:47.140 [19/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:47.140 [20/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:47.399 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:47.664 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.925 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:47.925 [24/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:47.925 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:47.925 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:47.925 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:47.925 [28/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:47.925 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:47.925 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:47.925 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:47.925 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:47.925 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:47.925 [34/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:47.925 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:47.925 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:47.925 [37/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:47.925 
[38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:47.925 [39/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:47.925 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:47.925 [41/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:47.925 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:47.925 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:47.925 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:47.925 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:47.925 [46/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:47.925 [47/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:47.925 [48/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:47.925 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:47.925 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:47.925 [51/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:47.925 [52/265] Linking static target lib/librte_telemetry.a 00:01:47.925 [53/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:47.925 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:48.188 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:48.188 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:48.188 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:48.188 [58/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:48.188 [59/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:48.188 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:48.188 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:48.188 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:48.188 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:48.188 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:48.188 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:48.188 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:48.188 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:48.188 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:48.188 [69/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:48.188 [70/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:48.188 [71/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:48.188 [72/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.188 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:48.451 [74/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:48.451 [75/265] Linking static target lib/librte_pci.a 00:01:48.451 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:48.451 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:48.451 [78/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:48.451 [79/265] Linking target lib/librte_log.so.24.0 00:01:48.451 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:48.451 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:48.451 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:48.451 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:48.451 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:48.451 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.451 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:48.718 [87/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:48.718 [88/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:48.718 [89/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:48.718 [90/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:48.718 [91/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:48.983 [92/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:48.983 [93/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:48.983 [94/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:48.983 [95/265] Linking target lib/librte_kvargs.so.24.0 00:01:48.983 [96/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:48.983 [97/265] Linking static target lib/librte_ring.a 00:01:48.983 [98/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:48.983 [99/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.983 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:48.983 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:48.983 [102/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:48.983 [103/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:48.983 [104/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:48.983 [105/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:48.983 [106/265] Linking static target lib/librte_meter.a 00:01:48.983 [107/265] Linking static target lib/librte_eal.a 00:01:48.983 [108/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:48.983 [109/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:48.983 [110/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:48.983 [111/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:48.983 [112/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:48.983 [113/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:48.983 [114/265] Linking static target lib/librte_mempool.a 00:01:49.249 [115/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:49.249 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:49.249 [117/265] Linking static target lib/librte_rcu.a 00:01:49.249 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:49.250 [119/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:49.250 [120/265] Generating 
lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.250 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:49.250 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:49.250 [123/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:49.250 [124/265] Linking target lib/librte_telemetry.so.24.0 00:01:49.250 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:49.250 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:49.250 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:49.250 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:49.250 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:49.250 [130/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:49.250 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:49.250 [132/265] Linking static target lib/librte_cmdline.a 00:01:49.513 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:49.513 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:49.513 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:49.513 [136/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.513 [137/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:49.513 [138/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.513 [139/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:49.513 [140/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:49.513 [141/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:49.513 [142/265] Linking static target lib/librte_net.a 00:01:49.513 [143/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:49.513 [144/265] Linking static target lib/librte_timer.a 00:01:49.513 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:49.773 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:49.773 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:49.773 [148/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:49.773 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:49.773 [150/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.773 [151/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:49.773 [152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:49.773 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:50.031 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:50.031 [155/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.031 [156/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:50.031 [157/265] Linking static target lib/librte_dmadev.a 00:01:50.031 [158/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:50.031 [159/265] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:50.031 [160/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:50.031 [161/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:50.031 [162/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.031 [163/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:50.031 [164/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:50.031 [165/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.031 [166/265] Linking static target lib/librte_hash.a 00:01:50.031 [167/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:50.031 [168/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:50.031 [169/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:50.031 [170/265] Linking static target lib/librte_compressdev.a 00:01:50.290 [171/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:50.290 [172/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:50.290 [173/265] Linking static target lib/librte_power.a 00:01:50.290 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:50.290 [175/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:50.290 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:50.290 [177/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:50.290 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:50.290 [179/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.290 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:50.290 [181/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:50.290 [182/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:50.290 [183/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.290 [184/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:50.549 [185/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:50.549 [186/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:50.549 [187/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:50.549 [188/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:50.549 [189/265] Linking static target lib/librte_mbuf.a 00:01:50.549 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:50.549 [191/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:50.549 [192/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.549 [193/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:50.549 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:50.549 [195/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:50.549 [196/265] Linking static target lib/librte_security.a 00:01:50.549 [197/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.549 [198/265] Generating drivers/rte_bus_vdev.pmd.c with a custom 
command 00:01:50.549 [199/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:50.549 [200/265] Linking static target lib/librte_reorder.a 00:01:50.549 [201/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.549 [202/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:50.549 [203/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:50.549 [204/265] Linking static target drivers/librte_bus_vdev.a 00:01:50.549 [205/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:50.807 [206/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.807 [207/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:50.807 [208/265] Linking static target drivers/librte_bus_pci.a 00:01:50.807 [209/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:50.807 [210/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.807 [211/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:50.807 [212/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:50.807 [213/265] Linking static target drivers/librte_mempool_ring.a 00:01:50.807 [214/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.807 [215/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.807 [216/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.807 [217/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.065 [218/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.065 [219/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.323 [220/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:51.323 [221/265] Linking static target lib/librte_ethdev.a 00:01:51.323 [222/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:51.323 [223/265] Linking static target lib/librte_cryptodev.a 00:01:52.258 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.204 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:55.742 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.742 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.742 [228/265] Linking target lib/librte_eal.so.24.0 00:01:55.742 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:55.742 [230/265] Linking target lib/librte_ring.so.24.0 00:01:55.742 [231/265] Linking target lib/librte_timer.so.24.0 00:01:55.742 [232/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:55.742 [233/265] Linking target lib/librte_meter.so.24.0 00:01:55.743 [234/265] Linking target lib/librte_pci.so.24.0 00:01:55.743 [235/265] Linking target lib/librte_dmadev.so.24.0 00:01:55.743 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:55.743 [237/265] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:55.743 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:55.743 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:55.743 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:55.743 [241/265] Linking target lib/librte_mempool.so.24.0 00:01:55.743 [242/265] Linking target lib/librte_rcu.so.24.0 00:01:55.743 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:55.743 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:55.743 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:55.743 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:55.743 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:56.001 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:56.001 [249/265] Linking target lib/librte_reorder.so.24.0 00:01:56.001 [250/265] Linking target lib/librte_compressdev.so.24.0 00:01:56.001 [251/265] Linking target lib/librte_net.so.24.0 00:01:56.001 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:56.260 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:56.260 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:56.260 [255/265] Linking target lib/librte_security.so.24.0 00:01:56.260 [256/265] Linking target lib/librte_hash.so.24.0 00:01:56.260 [257/265] Linking target lib/librte_cmdline.so.24.0 00:01:56.260 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:56.260 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:56.260 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:56.260 [261/265] Linking target lib/librte_power.so.24.0 00:01:58.793 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:58.793 [263/265] Linking static target lib/librte_vhost.a 00:01:59.729 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.729 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:59.729 INFO: autodetecting backend as ninja 00:01:59.729 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:00.664 CC lib/ut_mock/mock.o 00:02:00.664 CC lib/log/log.o 00:02:00.664 CC lib/log/log_flags.o 00:02:00.664 CC lib/log/log_deprecated.o 00:02:00.664 CC lib/ut/ut.o 00:02:00.664 LIB libspdk_ut_mock.a 00:02:00.922 LIB libspdk_log.a 00:02:00.922 SO libspdk_ut_mock.so.6.0 00:02:00.922 LIB libspdk_ut.a 00:02:00.922 SO libspdk_log.so.7.0 00:02:00.922 SO libspdk_ut.so.2.0 00:02:00.922 SYMLINK libspdk_ut_mock.so 00:02:00.922 SYMLINK libspdk_ut.so 00:02:00.922 SYMLINK libspdk_log.so 00:02:01.180 CC lib/dma/dma.o 00:02:01.180 CC lib/util/base64.o 00:02:01.180 CC lib/ioat/ioat.o 00:02:01.180 CXX lib/trace_parser/trace.o 00:02:01.180 CC lib/util/bit_array.o 00:02:01.180 CC lib/util/cpuset.o 00:02:01.180 CC lib/util/crc16.o 00:02:01.180 CC lib/util/crc32.o 00:02:01.180 CC lib/util/crc32c.o 00:02:01.180 CC lib/util/crc32_ieee.o 00:02:01.180 CC lib/util/crc64.o 00:02:01.180 CC lib/util/dif.o 00:02:01.180 CC lib/util/fd.o 00:02:01.180 CC lib/util/file.o 00:02:01.180 CC lib/util/hexlify.o 00:02:01.180 CC lib/util/iov.o 00:02:01.180 CC 
lib/util/math.o 00:02:01.180 CC lib/util/pipe.o 00:02:01.180 CC lib/util/strerror_tls.o 00:02:01.180 CC lib/util/string.o 00:02:01.180 CC lib/util/uuid.o 00:02:01.180 CC lib/util/fd_group.o 00:02:01.180 CC lib/util/zipf.o 00:02:01.180 CC lib/util/xor.o 00:02:01.180 CC lib/vfio_user/host/vfio_user_pci.o 00:02:01.180 CC lib/vfio_user/host/vfio_user.o 00:02:01.439 LIB libspdk_dma.a 00:02:01.439 SO libspdk_dma.so.4.0 00:02:01.439 LIB libspdk_ioat.a 00:02:01.439 SO libspdk_ioat.so.7.0 00:02:01.439 SYMLINK libspdk_dma.so 00:02:01.439 SYMLINK libspdk_ioat.so 00:02:01.439 LIB libspdk_vfio_user.a 00:02:01.439 SO libspdk_vfio_user.so.5.0 00:02:01.439 SYMLINK libspdk_vfio_user.so 00:02:01.697 LIB libspdk_util.a 00:02:01.697 SO libspdk_util.so.9.0 00:02:01.697 SYMLINK libspdk_util.so 00:02:01.956 CC lib/idxd/idxd.o 00:02:01.956 CC lib/env_dpdk/env.o 00:02:01.956 CC lib/rdma/common.o 00:02:01.956 CC lib/idxd/idxd_user.o 00:02:01.956 CC lib/env_dpdk/memory.o 00:02:01.956 CC lib/conf/conf.o 00:02:01.956 CC lib/rdma/rdma_verbs.o 00:02:01.956 CC lib/env_dpdk/pci.o 00:02:01.956 CC lib/vmd/vmd.o 00:02:01.956 CC lib/vmd/led.o 00:02:01.956 CC lib/json/json_parse.o 00:02:01.956 CC lib/env_dpdk/init.o 00:02:01.956 CC lib/env_dpdk/threads.o 00:02:01.956 CC lib/json/json_util.o 00:02:01.956 CC lib/env_dpdk/pci_ioat.o 00:02:01.956 CC lib/json/json_write.o 00:02:01.956 CC lib/env_dpdk/pci_virtio.o 00:02:01.956 CC lib/env_dpdk/pci_vmd.o 00:02:01.956 CC lib/env_dpdk/pci_idxd.o 00:02:01.956 CC lib/env_dpdk/pci_event.o 00:02:01.956 CC lib/env_dpdk/sigbus_handler.o 00:02:01.956 CC lib/env_dpdk/pci_dpdk.o 00:02:01.956 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:01.956 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:01.956 LIB libspdk_trace_parser.a 00:02:01.956 SO libspdk_trace_parser.so.5.0 00:02:02.215 SYMLINK libspdk_trace_parser.so 00:02:02.215 LIB libspdk_conf.a 00:02:02.215 SO libspdk_conf.so.6.0 00:02:02.215 LIB libspdk_rdma.a 00:02:02.215 SYMLINK libspdk_conf.so 00:02:02.215 LIB libspdk_json.a 00:02:02.215 SO libspdk_rdma.so.6.0 00:02:02.473 SO libspdk_json.so.6.0 00:02:02.473 SYMLINK libspdk_rdma.so 00:02:02.473 SYMLINK libspdk_json.so 00:02:02.473 CC lib/jsonrpc/jsonrpc_server.o 00:02:02.473 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:02.473 LIB libspdk_idxd.a 00:02:02.473 CC lib/jsonrpc/jsonrpc_client.o 00:02:02.473 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:02.473 SO libspdk_idxd.so.12.0 00:02:02.731 SYMLINK libspdk_idxd.so 00:02:02.731 LIB libspdk_vmd.a 00:02:02.731 SO libspdk_vmd.so.6.0 00:02:02.731 SYMLINK libspdk_vmd.so 00:02:02.731 LIB libspdk_jsonrpc.a 00:02:02.989 SO libspdk_jsonrpc.so.6.0 00:02:02.989 SYMLINK libspdk_jsonrpc.so 00:02:02.989 CC lib/rpc/rpc.o 00:02:03.247 LIB libspdk_rpc.a 00:02:03.247 SO libspdk_rpc.so.6.0 00:02:03.505 SYMLINK libspdk_rpc.so 00:02:03.505 CC lib/trace/trace.o 00:02:03.505 CC lib/keyring/keyring.o 00:02:03.505 CC lib/notify/notify.o 00:02:03.505 CC lib/keyring/keyring_rpc.o 00:02:03.505 CC lib/trace/trace_flags.o 00:02:03.505 CC lib/notify/notify_rpc.o 00:02:03.505 CC lib/trace/trace_rpc.o 00:02:03.763 LIB libspdk_notify.a 00:02:03.763 SO libspdk_notify.so.6.0 00:02:03.763 LIB libspdk_keyring.a 00:02:03.763 SYMLINK libspdk_notify.so 00:02:03.763 SO libspdk_keyring.so.1.0 00:02:03.763 LIB libspdk_trace.a 00:02:03.763 SO libspdk_trace.so.10.0 00:02:03.763 SYMLINK libspdk_keyring.so 00:02:04.022 SYMLINK libspdk_trace.so 00:02:04.022 LIB libspdk_env_dpdk.a 00:02:04.022 SO libspdk_env_dpdk.so.14.0 00:02:04.022 CC lib/sock/sock.o 00:02:04.022 CC lib/sock/sock_rpc.o 00:02:04.022 CC 
lib/thread/thread.o 00:02:04.022 CC lib/thread/iobuf.o 00:02:04.280 SYMLINK libspdk_env_dpdk.so 00:02:04.538 LIB libspdk_sock.a 00:02:04.538 SO libspdk_sock.so.9.0 00:02:04.538 SYMLINK libspdk_sock.so 00:02:04.796 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:04.796 CC lib/nvme/nvme_ctrlr.o 00:02:04.796 CC lib/nvme/nvme_fabric.o 00:02:04.796 CC lib/nvme/nvme_ns_cmd.o 00:02:04.796 CC lib/nvme/nvme_ns.o 00:02:04.796 CC lib/nvme/nvme_pcie_common.o 00:02:04.796 CC lib/nvme/nvme_pcie.o 00:02:04.796 CC lib/nvme/nvme_qpair.o 00:02:04.796 CC lib/nvme/nvme.o 00:02:04.796 CC lib/nvme/nvme_quirks.o 00:02:04.796 CC lib/nvme/nvme_transport.o 00:02:04.796 CC lib/nvme/nvme_discovery.o 00:02:04.796 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:04.796 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:04.796 CC lib/nvme/nvme_tcp.o 00:02:04.796 CC lib/nvme/nvme_opal.o 00:02:04.796 CC lib/nvme/nvme_io_msg.o 00:02:04.796 CC lib/nvme/nvme_poll_group.o 00:02:04.796 CC lib/nvme/nvme_zns.o 00:02:04.796 CC lib/nvme/nvme_stubs.o 00:02:04.796 CC lib/nvme/nvme_auth.o 00:02:04.796 CC lib/nvme/nvme_cuse.o 00:02:04.796 CC lib/nvme/nvme_vfio_user.o 00:02:04.796 CC lib/nvme/nvme_rdma.o 00:02:05.733 LIB libspdk_thread.a 00:02:05.733 SO libspdk_thread.so.10.0 00:02:05.733 SYMLINK libspdk_thread.so 00:02:05.991 CC lib/vfu_tgt/tgt_endpoint.o 00:02:05.991 CC lib/virtio/virtio.o 00:02:05.991 CC lib/blob/blobstore.o 00:02:05.991 CC lib/accel/accel.o 00:02:05.991 CC lib/vfu_tgt/tgt_rpc.o 00:02:05.991 CC lib/init/json_config.o 00:02:05.991 CC lib/virtio/virtio_vhost_user.o 00:02:05.991 CC lib/blob/request.o 00:02:05.991 CC lib/init/subsystem.o 00:02:05.991 CC lib/accel/accel_rpc.o 00:02:05.991 CC lib/virtio/virtio_vfio_user.o 00:02:05.991 CC lib/blob/zeroes.o 00:02:05.991 CC lib/init/subsystem_rpc.o 00:02:05.991 CC lib/accel/accel_sw.o 00:02:05.991 CC lib/virtio/virtio_pci.o 00:02:05.991 CC lib/init/rpc.o 00:02:05.991 CC lib/blob/blob_bs_dev.o 00:02:06.249 LIB libspdk_init.a 00:02:06.250 SO libspdk_init.so.5.0 00:02:06.250 LIB libspdk_virtio.a 00:02:06.250 LIB libspdk_vfu_tgt.a 00:02:06.250 SYMLINK libspdk_init.so 00:02:06.250 SO libspdk_vfu_tgt.so.3.0 00:02:06.250 SO libspdk_virtio.so.7.0 00:02:06.250 SYMLINK libspdk_vfu_tgt.so 00:02:06.250 SYMLINK libspdk_virtio.so 00:02:06.508 CC lib/event/app.o 00:02:06.508 CC lib/event/reactor.o 00:02:06.508 CC lib/event/log_rpc.o 00:02:06.508 CC lib/event/app_rpc.o 00:02:06.508 CC lib/event/scheduler_static.o 00:02:06.767 LIB libspdk_event.a 00:02:06.767 SO libspdk_event.so.13.0 00:02:07.026 SYMLINK libspdk_event.so 00:02:07.026 LIB libspdk_accel.a 00:02:07.026 SO libspdk_accel.so.15.0 00:02:07.026 SYMLINK libspdk_accel.so 00:02:07.026 LIB libspdk_nvme.a 00:02:07.283 SO libspdk_nvme.so.13.0 00:02:07.283 CC lib/bdev/bdev.o 00:02:07.283 CC lib/bdev/bdev_rpc.o 00:02:07.283 CC lib/bdev/bdev_zone.o 00:02:07.283 CC lib/bdev/part.o 00:02:07.283 CC lib/bdev/scsi_nvme.o 00:02:07.542 SYMLINK libspdk_nvme.so 00:02:08.949 LIB libspdk_blob.a 00:02:08.949 SO libspdk_blob.so.11.0 00:02:08.949 SYMLINK libspdk_blob.so 00:02:08.949 CC lib/blobfs/blobfs.o 00:02:08.949 CC lib/blobfs/tree.o 00:02:08.949 CC lib/lvol/lvol.o 00:02:09.884 LIB libspdk_bdev.a 00:02:09.884 SO libspdk_bdev.so.15.0 00:02:09.884 LIB libspdk_blobfs.a 00:02:09.884 SO libspdk_blobfs.so.10.0 00:02:09.884 SYMLINK libspdk_bdev.so 00:02:09.884 SYMLINK libspdk_blobfs.so 00:02:09.884 LIB libspdk_lvol.a 00:02:09.884 SO libspdk_lvol.so.10.0 00:02:09.884 SYMLINK libspdk_lvol.so 00:02:10.152 CC lib/nbd/nbd.o 00:02:10.152 CC lib/nvmf/ctrlr.o 00:02:10.152 CC lib/ublk/ublk.o 
00:02:10.152 CC lib/scsi/dev.o 00:02:10.152 CC lib/nbd/nbd_rpc.o 00:02:10.152 CC lib/ftl/ftl_core.o 00:02:10.152 CC lib/ublk/ublk_rpc.o 00:02:10.152 CC lib/scsi/lun.o 00:02:10.152 CC lib/nvmf/ctrlr_discovery.o 00:02:10.152 CC lib/ftl/ftl_init.o 00:02:10.152 CC lib/scsi/port.o 00:02:10.152 CC lib/nvmf/ctrlr_bdev.o 00:02:10.152 CC lib/ftl/ftl_layout.o 00:02:10.152 CC lib/scsi/scsi.o 00:02:10.152 CC lib/nvmf/subsystem.o 00:02:10.152 CC lib/scsi/scsi_bdev.o 00:02:10.152 CC lib/ftl/ftl_debug.o 00:02:10.152 CC lib/nvmf/nvmf.o 00:02:10.152 CC lib/ftl/ftl_io.o 00:02:10.152 CC lib/nvmf/nvmf_rpc.o 00:02:10.152 CC lib/scsi/scsi_pr.o 00:02:10.152 CC lib/scsi/scsi_rpc.o 00:02:10.152 CC lib/nvmf/transport.o 00:02:10.152 CC lib/ftl/ftl_sb.o 00:02:10.152 CC lib/nvmf/tcp.o 00:02:10.152 CC lib/ftl/ftl_l2p.o 00:02:10.153 CC lib/scsi/task.o 00:02:10.153 CC lib/ftl/ftl_l2p_flat.o 00:02:10.153 CC lib/nvmf/vfio_user.o 00:02:10.153 CC lib/ftl/ftl_band.o 00:02:10.153 CC lib/nvmf/rdma.o 00:02:10.153 CC lib/ftl/ftl_nv_cache.o 00:02:10.153 CC lib/ftl/ftl_band_ops.o 00:02:10.153 CC lib/ftl/ftl_writer.o 00:02:10.153 CC lib/ftl/ftl_rq.o 00:02:10.153 CC lib/ftl/ftl_reloc.o 00:02:10.153 CC lib/ftl/ftl_l2p_cache.o 00:02:10.153 CC lib/ftl/ftl_p2l.o 00:02:10.153 CC lib/ftl/mngt/ftl_mngt.o 00:02:10.153 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:10.153 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:10.153 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:10.153 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:10.153 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:10.153 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:10.153 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:10.153 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:10.153 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:10.411 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:10.411 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:10.411 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:10.411 CC lib/ftl/utils/ftl_conf.o 00:02:10.411 CC lib/ftl/utils/ftl_md.o 00:02:10.411 CC lib/ftl/utils/ftl_mempool.o 00:02:10.411 CC lib/ftl/utils/ftl_bitmap.o 00:02:10.411 CC lib/ftl/utils/ftl_property.o 00:02:10.411 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:10.411 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:10.411 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:10.411 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:10.411 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:10.411 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:10.411 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:10.411 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:10.411 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:10.411 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:10.411 CC lib/ftl/base/ftl_base_dev.o 00:02:10.673 CC lib/ftl/base/ftl_base_bdev.o 00:02:10.673 CC lib/ftl/ftl_trace.o 00:02:10.931 LIB libspdk_nbd.a 00:02:10.931 SO libspdk_nbd.so.7.0 00:02:10.931 SYMLINK libspdk_nbd.so 00:02:10.931 LIB libspdk_scsi.a 00:02:10.931 SO libspdk_scsi.so.9.0 00:02:11.190 SYMLINK libspdk_scsi.so 00:02:11.190 LIB libspdk_ublk.a 00:02:11.190 SO libspdk_ublk.so.3.0 00:02:11.190 SYMLINK libspdk_ublk.so 00:02:11.190 CC lib/iscsi/conn.o 00:02:11.190 CC lib/iscsi/init_grp.o 00:02:11.190 CC lib/vhost/vhost.o 00:02:11.190 CC lib/iscsi/iscsi.o 00:02:11.190 CC lib/vhost/vhost_rpc.o 00:02:11.190 CC lib/iscsi/md5.o 00:02:11.190 CC lib/vhost/vhost_scsi.o 00:02:11.190 CC lib/iscsi/param.o 00:02:11.190 CC lib/vhost/vhost_blk.o 00:02:11.190 CC lib/iscsi/portal_grp.o 00:02:11.190 CC lib/vhost/rte_vhost_user.o 00:02:11.190 CC lib/iscsi/tgt_node.o 00:02:11.190 CC lib/iscsi/iscsi_subsystem.o 00:02:11.190 CC lib/iscsi/iscsi_rpc.o 00:02:11.190 CC lib/iscsi/task.o 00:02:11.449 LIB libspdk_ftl.a 
00:02:11.449 SO libspdk_ftl.so.9.0 00:02:12.016 SYMLINK libspdk_ftl.so 00:02:12.583 LIB libspdk_vhost.a 00:02:12.583 SO libspdk_vhost.so.8.0 00:02:12.583 LIB libspdk_nvmf.a 00:02:12.583 SYMLINK libspdk_vhost.so 00:02:12.583 SO libspdk_nvmf.so.18.0 00:02:12.583 LIB libspdk_iscsi.a 00:02:12.842 SO libspdk_iscsi.so.8.0 00:02:12.842 SYMLINK libspdk_nvmf.so 00:02:12.842 SYMLINK libspdk_iscsi.so 00:02:13.101 CC module/vfu_device/vfu_virtio.o 00:02:13.101 CC module/env_dpdk/env_dpdk_rpc.o 00:02:13.101 CC module/vfu_device/vfu_virtio_blk.o 00:02:13.101 CC module/vfu_device/vfu_virtio_scsi.o 00:02:13.101 CC module/vfu_device/vfu_virtio_rpc.o 00:02:13.101 CC module/blob/bdev/blob_bdev.o 00:02:13.101 CC module/sock/posix/posix.o 00:02:13.101 CC module/accel/iaa/accel_iaa.o 00:02:13.101 CC module/accel/ioat/accel_ioat.o 00:02:13.101 CC module/accel/iaa/accel_iaa_rpc.o 00:02:13.101 CC module/accel/error/accel_error.o 00:02:13.101 CC module/accel/error/accel_error_rpc.o 00:02:13.101 CC module/accel/ioat/accel_ioat_rpc.o 00:02:13.101 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:13.101 CC module/scheduler/gscheduler/gscheduler.o 00:02:13.101 CC module/keyring/file/keyring_rpc.o 00:02:13.101 CC module/keyring/file/keyring.o 00:02:13.101 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:13.101 CC module/accel/dsa/accel_dsa.o 00:02:13.101 CC module/accel/dsa/accel_dsa_rpc.o 00:02:13.360 LIB libspdk_env_dpdk_rpc.a 00:02:13.360 SO libspdk_env_dpdk_rpc.so.6.0 00:02:13.360 SYMLINK libspdk_env_dpdk_rpc.so 00:02:13.360 LIB libspdk_keyring_file.a 00:02:13.360 LIB libspdk_scheduler_gscheduler.a 00:02:13.360 LIB libspdk_scheduler_dpdk_governor.a 00:02:13.360 SO libspdk_scheduler_gscheduler.so.4.0 00:02:13.360 SO libspdk_keyring_file.so.1.0 00:02:13.360 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:13.360 LIB libspdk_accel_error.a 00:02:13.360 LIB libspdk_accel_ioat.a 00:02:13.360 LIB libspdk_scheduler_dynamic.a 00:02:13.360 LIB libspdk_accel_iaa.a 00:02:13.360 SO libspdk_accel_error.so.2.0 00:02:13.360 SO libspdk_accel_ioat.so.6.0 00:02:13.360 SYMLINK libspdk_scheduler_gscheduler.so 00:02:13.360 SO libspdk_scheduler_dynamic.so.4.0 00:02:13.360 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:13.360 SYMLINK libspdk_keyring_file.so 00:02:13.360 SO libspdk_accel_iaa.so.3.0 00:02:13.360 LIB libspdk_accel_dsa.a 00:02:13.618 SYMLINK libspdk_accel_error.so 00:02:13.618 SO libspdk_accel_dsa.so.5.0 00:02:13.618 LIB libspdk_blob_bdev.a 00:02:13.618 SYMLINK libspdk_scheduler_dynamic.so 00:02:13.618 SYMLINK libspdk_accel_ioat.so 00:02:13.618 SYMLINK libspdk_accel_iaa.so 00:02:13.618 SO libspdk_blob_bdev.so.11.0 00:02:13.618 SYMLINK libspdk_accel_dsa.so 00:02:13.618 SYMLINK libspdk_blob_bdev.so 00:02:13.880 LIB libspdk_vfu_device.a 00:02:13.880 SO libspdk_vfu_device.so.3.0 00:02:13.880 CC module/bdev/gpt/gpt.o 00:02:13.880 CC module/bdev/lvol/vbdev_lvol.o 00:02:13.880 CC module/bdev/gpt/vbdev_gpt.o 00:02:13.880 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:13.880 CC module/bdev/error/vbdev_error.o 00:02:13.880 CC module/bdev/error/vbdev_error_rpc.o 00:02:13.880 CC module/bdev/malloc/bdev_malloc.o 00:02:13.880 CC module/blobfs/bdev/blobfs_bdev.o 00:02:13.880 CC module/bdev/delay/vbdev_delay.o 00:02:13.880 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:13.880 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:13.880 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:13.880 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:13.880 CC module/bdev/null/bdev_null.o 00:02:13.880 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:13.880 CC 
module/bdev/split/vbdev_split_rpc.o 00:02:13.880 CC module/bdev/ftl/bdev_ftl.o 00:02:13.880 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:13.880 CC module/bdev/split/vbdev_split.o 00:02:13.880 CC module/bdev/null/bdev_null_rpc.o 00:02:13.880 CC module/bdev/aio/bdev_aio.o 00:02:13.880 CC module/bdev/raid/bdev_raid.o 00:02:13.880 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:13.880 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:13.880 CC module/bdev/iscsi/bdev_iscsi.o 00:02:13.880 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:13.880 CC module/bdev/aio/bdev_aio_rpc.o 00:02:13.880 CC module/bdev/passthru/vbdev_passthru.o 00:02:13.880 CC module/bdev/raid/bdev_raid_rpc.o 00:02:13.880 CC module/bdev/nvme/bdev_nvme.o 00:02:13.880 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:13.880 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:13.880 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:13.880 CC module/bdev/nvme/nvme_rpc.o 00:02:13.880 CC module/bdev/raid/bdev_raid_sb.o 00:02:13.880 CC module/bdev/raid/raid0.o 00:02:13.880 CC module/bdev/nvme/bdev_mdns_client.o 00:02:13.880 CC module/bdev/raid/raid1.o 00:02:13.880 CC module/bdev/raid/concat.o 00:02:13.880 CC module/bdev/nvme/vbdev_opal.o 00:02:13.880 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:13.880 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:13.880 SYMLINK libspdk_vfu_device.so 00:02:14.139 LIB libspdk_sock_posix.a 00:02:14.139 SO libspdk_sock_posix.so.6.0 00:02:14.139 LIB libspdk_blobfs_bdev.a 00:02:14.139 SO libspdk_blobfs_bdev.so.6.0 00:02:14.139 SYMLINK libspdk_sock_posix.so 00:02:14.397 LIB libspdk_bdev_delay.a 00:02:14.397 LIB libspdk_bdev_split.a 00:02:14.397 SYMLINK libspdk_blobfs_bdev.so 00:02:14.397 LIB libspdk_bdev_error.a 00:02:14.397 LIB libspdk_bdev_ftl.a 00:02:14.397 SO libspdk_bdev_delay.so.6.0 00:02:14.397 SO libspdk_bdev_split.so.6.0 00:02:14.397 LIB libspdk_bdev_null.a 00:02:14.397 SO libspdk_bdev_error.so.6.0 00:02:14.397 SO libspdk_bdev_ftl.so.6.0 00:02:14.397 LIB libspdk_bdev_gpt.a 00:02:14.397 SO libspdk_bdev_null.so.6.0 00:02:14.397 LIB libspdk_bdev_passthru.a 00:02:14.397 LIB libspdk_bdev_zone_block.a 00:02:14.397 SYMLINK libspdk_bdev_delay.so 00:02:14.397 SO libspdk_bdev_gpt.so.6.0 00:02:14.397 SYMLINK libspdk_bdev_split.so 00:02:14.397 SO libspdk_bdev_passthru.so.6.0 00:02:14.397 LIB libspdk_bdev_iscsi.a 00:02:14.397 SO libspdk_bdev_zone_block.so.6.0 00:02:14.397 SYMLINK libspdk_bdev_error.so 00:02:14.397 SYMLINK libspdk_bdev_ftl.so 00:02:14.397 SYMLINK libspdk_bdev_null.so 00:02:14.397 LIB libspdk_bdev_malloc.a 00:02:14.397 SO libspdk_bdev_iscsi.so.6.0 00:02:14.397 SYMLINK libspdk_bdev_gpt.so 00:02:14.397 SO libspdk_bdev_malloc.so.6.0 00:02:14.397 SYMLINK libspdk_bdev_passthru.so 00:02:14.397 LIB libspdk_bdev_aio.a 00:02:14.397 SYMLINK libspdk_bdev_zone_block.so 00:02:14.397 SO libspdk_bdev_aio.so.6.0 00:02:14.397 SYMLINK libspdk_bdev_iscsi.so 00:02:14.397 SYMLINK libspdk_bdev_malloc.so 00:02:14.656 SYMLINK libspdk_bdev_aio.so 00:02:14.656 LIB libspdk_bdev_virtio.a 00:02:14.656 SO libspdk_bdev_virtio.so.6.0 00:02:14.656 LIB libspdk_bdev_lvol.a 00:02:14.656 SO libspdk_bdev_lvol.so.6.0 00:02:14.656 SYMLINK libspdk_bdev_virtio.so 00:02:14.656 SYMLINK libspdk_bdev_lvol.so 00:02:14.915 LIB libspdk_bdev_raid.a 00:02:14.915 SO libspdk_bdev_raid.so.6.0 00:02:15.173 SYMLINK libspdk_bdev_raid.so 00:02:16.109 LIB libspdk_bdev_nvme.a 00:02:16.109 SO libspdk_bdev_nvme.so.7.0 00:02:16.368 SYMLINK libspdk_bdev_nvme.so 00:02:16.626 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:16.626 CC module/event/subsystems/sock/sock.o 
00:02:16.626 CC module/event/subsystems/vmd/vmd.o 00:02:16.626 CC module/event/subsystems/iobuf/iobuf.o 00:02:16.626 CC module/event/subsystems/keyring/keyring.o 00:02:16.626 CC module/event/subsystems/scheduler/scheduler.o 00:02:16.626 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:16.626 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:16.626 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:16.885 LIB libspdk_event_sock.a 00:02:16.885 LIB libspdk_event_keyring.a 00:02:16.885 LIB libspdk_event_vhost_blk.a 00:02:16.885 LIB libspdk_event_vfu_tgt.a 00:02:16.885 LIB libspdk_event_scheduler.a 00:02:16.885 LIB libspdk_event_vmd.a 00:02:16.885 SO libspdk_event_sock.so.5.0 00:02:16.885 SO libspdk_event_keyring.so.1.0 00:02:16.885 SO libspdk_event_vhost_blk.so.3.0 00:02:16.885 LIB libspdk_event_iobuf.a 00:02:16.885 SO libspdk_event_vfu_tgt.so.3.0 00:02:16.885 SO libspdk_event_scheduler.so.4.0 00:02:16.885 SO libspdk_event_vmd.so.6.0 00:02:16.885 SO libspdk_event_iobuf.so.3.0 00:02:16.885 SYMLINK libspdk_event_sock.so 00:02:16.885 SYMLINK libspdk_event_keyring.so 00:02:16.885 SYMLINK libspdk_event_vhost_blk.so 00:02:16.885 SYMLINK libspdk_event_vfu_tgt.so 00:02:16.885 SYMLINK libspdk_event_scheduler.so 00:02:16.885 SYMLINK libspdk_event_vmd.so 00:02:16.885 SYMLINK libspdk_event_iobuf.so 00:02:17.144 CC module/event/subsystems/accel/accel.o 00:02:17.144 LIB libspdk_event_accel.a 00:02:17.144 SO libspdk_event_accel.so.6.0 00:02:17.402 SYMLINK libspdk_event_accel.so 00:02:17.402 CC module/event/subsystems/bdev/bdev.o 00:02:17.661 LIB libspdk_event_bdev.a 00:02:17.661 SO libspdk_event_bdev.so.6.0 00:02:17.661 SYMLINK libspdk_event_bdev.so 00:02:17.919 CC module/event/subsystems/scsi/scsi.o 00:02:17.919 CC module/event/subsystems/nbd/nbd.o 00:02:17.919 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:17.919 CC module/event/subsystems/ublk/ublk.o 00:02:17.919 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:18.177 LIB libspdk_event_nbd.a 00:02:18.177 LIB libspdk_event_ublk.a 00:02:18.177 LIB libspdk_event_scsi.a 00:02:18.177 SO libspdk_event_nbd.so.6.0 00:02:18.177 SO libspdk_event_ublk.so.3.0 00:02:18.177 SO libspdk_event_scsi.so.6.0 00:02:18.177 SYMLINK libspdk_event_nbd.so 00:02:18.177 SYMLINK libspdk_event_ublk.so 00:02:18.177 SYMLINK libspdk_event_scsi.so 00:02:18.177 LIB libspdk_event_nvmf.a 00:02:18.177 SO libspdk_event_nvmf.so.6.0 00:02:18.177 SYMLINK libspdk_event_nvmf.so 00:02:18.177 CC module/event/subsystems/iscsi/iscsi.o 00:02:18.436 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:18.436 LIB libspdk_event_iscsi.a 00:02:18.436 LIB libspdk_event_vhost_scsi.a 00:02:18.436 SO libspdk_event_iscsi.so.6.0 00:02:18.436 SO libspdk_event_vhost_scsi.so.3.0 00:02:18.436 SYMLINK libspdk_event_iscsi.so 00:02:18.436 SYMLINK libspdk_event_vhost_scsi.so 00:02:18.695 SO libspdk.so.6.0 00:02:18.695 SYMLINK libspdk.so 00:02:18.959 CC app/trace_record/trace_record.o 00:02:18.959 CXX app/trace/trace.o 00:02:18.959 CC app/spdk_nvme_identify/identify.o 00:02:18.959 CC test/rpc_client/rpc_client_test.o 00:02:18.959 CC app/spdk_lspci/spdk_lspci.o 00:02:18.959 CC app/spdk_nvme_discover/discovery_aer.o 00:02:18.959 CC app/spdk_top/spdk_top.o 00:02:18.959 CC app/spdk_nvme_perf/perf.o 00:02:18.959 TEST_HEADER include/spdk/accel.h 00:02:18.959 TEST_HEADER include/spdk/accel_module.h 00:02:18.959 TEST_HEADER include/spdk/assert.h 00:02:18.959 TEST_HEADER include/spdk/barrier.h 00:02:18.959 TEST_HEADER include/spdk/base64.h 00:02:18.959 TEST_HEADER include/spdk/bdev.h 00:02:18.959 TEST_HEADER 
include/spdk/bdev_module.h 00:02:18.959 TEST_HEADER include/spdk/bdev_zone.h 00:02:18.959 TEST_HEADER include/spdk/bit_array.h 00:02:18.959 TEST_HEADER include/spdk/bit_pool.h 00:02:18.959 TEST_HEADER include/spdk/blob_bdev.h 00:02:18.959 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:18.959 TEST_HEADER include/spdk/blobfs.h 00:02:18.959 TEST_HEADER include/spdk/blob.h 00:02:18.959 TEST_HEADER include/spdk/conf.h 00:02:18.959 TEST_HEADER include/spdk/config.h 00:02:18.959 TEST_HEADER include/spdk/cpuset.h 00:02:18.959 TEST_HEADER include/spdk/crc16.h 00:02:18.959 CC app/spdk_dd/spdk_dd.o 00:02:18.959 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:18.959 TEST_HEADER include/spdk/crc32.h 00:02:18.959 TEST_HEADER include/spdk/crc64.h 00:02:18.959 TEST_HEADER include/spdk/dif.h 00:02:18.959 CC app/iscsi_tgt/iscsi_tgt.o 00:02:18.959 CC app/nvmf_tgt/nvmf_main.o 00:02:18.959 TEST_HEADER include/spdk/dma.h 00:02:18.959 TEST_HEADER include/spdk/endian.h 00:02:18.959 TEST_HEADER include/spdk/env_dpdk.h 00:02:18.959 TEST_HEADER include/spdk/env.h 00:02:18.959 CC app/vhost/vhost.o 00:02:18.959 TEST_HEADER include/spdk/event.h 00:02:18.959 TEST_HEADER include/spdk/fd_group.h 00:02:18.959 TEST_HEADER include/spdk/fd.h 00:02:18.959 TEST_HEADER include/spdk/file.h 00:02:18.959 TEST_HEADER include/spdk/ftl.h 00:02:18.959 CC examples/sock/hello_world/hello_sock.o 00:02:18.959 TEST_HEADER include/spdk/gpt_spec.h 00:02:18.959 CC examples/ioat/verify/verify.o 00:02:18.959 TEST_HEADER include/spdk/hexlify.h 00:02:18.959 CC examples/nvme/reconnect/reconnect.o 00:02:18.959 CC examples/ioat/perf/perf.o 00:02:18.959 CC app/spdk_tgt/spdk_tgt.o 00:02:18.959 TEST_HEADER include/spdk/histogram_data.h 00:02:18.959 CC examples/nvme/hotplug/hotplug.o 00:02:18.959 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:18.959 TEST_HEADER include/spdk/idxd.h 00:02:18.959 TEST_HEADER include/spdk/idxd_spec.h 00:02:18.959 CC examples/nvme/abort/abort.o 00:02:18.959 TEST_HEADER include/spdk/init.h 00:02:18.959 CC test/env/vtophys/vtophys.o 00:02:18.959 CC examples/accel/perf/accel_perf.o 00:02:18.959 TEST_HEADER include/spdk/ioat.h 00:02:18.959 CC examples/vmd/lsvmd/lsvmd.o 00:02:18.959 CC examples/nvme/arbitration/arbitration.o 00:02:18.959 TEST_HEADER include/spdk/ioat_spec.h 00:02:18.959 CC test/thread/poller_perf/poller_perf.o 00:02:18.959 CC test/nvme/aer/aer.o 00:02:18.959 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:18.959 CC examples/nvme/hello_world/hello_world.o 00:02:18.959 CC examples/idxd/perf/perf.o 00:02:18.959 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:18.959 TEST_HEADER include/spdk/iscsi_spec.h 00:02:18.959 TEST_HEADER include/spdk/json.h 00:02:18.959 CC examples/util/zipf/zipf.o 00:02:18.959 TEST_HEADER include/spdk/jsonrpc.h 00:02:18.959 CC test/event/event_perf/event_perf.o 00:02:18.959 CC app/fio/nvme/fio_plugin.o 00:02:18.959 TEST_HEADER include/spdk/keyring.h 00:02:18.959 TEST_HEADER include/spdk/keyring_module.h 00:02:18.959 TEST_HEADER include/spdk/likely.h 00:02:18.959 TEST_HEADER include/spdk/log.h 00:02:18.959 TEST_HEADER include/spdk/lvol.h 00:02:18.959 TEST_HEADER include/spdk/memory.h 00:02:18.959 TEST_HEADER include/spdk/mmio.h 00:02:18.959 CC examples/blob/hello_world/hello_blob.o 00:02:18.959 CC test/bdev/bdevio/bdevio.o 00:02:18.959 TEST_HEADER include/spdk/nbd.h 00:02:19.221 CC examples/bdev/hello_world/hello_bdev.o 00:02:19.221 TEST_HEADER include/spdk/notify.h 00:02:19.221 TEST_HEADER include/spdk/nvme.h 00:02:19.221 TEST_HEADER include/spdk/nvme_intel.h 00:02:19.221 CC 
app/fio/bdev/fio_plugin.o 00:02:19.221 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:19.221 CC test/accel/dif/dif.o 00:02:19.221 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:19.221 CC examples/thread/thread/thread_ex.o 00:02:19.221 TEST_HEADER include/spdk/nvme_spec.h 00:02:19.221 CC examples/nvmf/nvmf/nvmf.o 00:02:19.221 CC examples/bdev/bdevperf/bdevperf.o 00:02:19.221 CC test/dma/test_dma/test_dma.o 00:02:19.221 TEST_HEADER include/spdk/nvme_zns.h 00:02:19.221 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:19.221 CC test/blobfs/mkfs/mkfs.o 00:02:19.221 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:19.221 CC test/app/bdev_svc/bdev_svc.o 00:02:19.221 TEST_HEADER include/spdk/nvmf.h 00:02:19.221 TEST_HEADER include/spdk/nvmf_spec.h 00:02:19.221 TEST_HEADER include/spdk/nvmf_transport.h 00:02:19.221 TEST_HEADER include/spdk/opal.h 00:02:19.221 TEST_HEADER include/spdk/opal_spec.h 00:02:19.221 TEST_HEADER include/spdk/pci_ids.h 00:02:19.221 TEST_HEADER include/spdk/pipe.h 00:02:19.221 TEST_HEADER include/spdk/queue.h 00:02:19.221 TEST_HEADER include/spdk/reduce.h 00:02:19.221 TEST_HEADER include/spdk/rpc.h 00:02:19.221 TEST_HEADER include/spdk/scheduler.h 00:02:19.221 TEST_HEADER include/spdk/scsi.h 00:02:19.221 TEST_HEADER include/spdk/scsi_spec.h 00:02:19.221 TEST_HEADER include/spdk/sock.h 00:02:19.221 TEST_HEADER include/spdk/stdinc.h 00:02:19.221 LINK spdk_lspci 00:02:19.221 TEST_HEADER include/spdk/string.h 00:02:19.221 TEST_HEADER include/spdk/thread.h 00:02:19.221 TEST_HEADER include/spdk/trace.h 00:02:19.221 TEST_HEADER include/spdk/trace_parser.h 00:02:19.221 TEST_HEADER include/spdk/tree.h 00:02:19.221 TEST_HEADER include/spdk/ublk.h 00:02:19.221 TEST_HEADER include/spdk/util.h 00:02:19.221 CC test/env/mem_callbacks/mem_callbacks.o 00:02:19.221 TEST_HEADER include/spdk/uuid.h 00:02:19.221 TEST_HEADER include/spdk/version.h 00:02:19.221 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:19.221 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:19.221 TEST_HEADER include/spdk/vhost.h 00:02:19.221 CC test/lvol/esnap/esnap.o 00:02:19.221 TEST_HEADER include/spdk/vmd.h 00:02:19.221 TEST_HEADER include/spdk/xor.h 00:02:19.221 TEST_HEADER include/spdk/zipf.h 00:02:19.221 CXX test/cpp_headers/accel.o 00:02:19.221 LINK rpc_client_test 00:02:19.221 LINK spdk_nvme_discover 00:02:19.488 LINK lsvmd 00:02:19.488 LINK nvmf_tgt 00:02:19.488 LINK vtophys 00:02:19.488 LINK poller_perf 00:02:19.488 LINK event_perf 00:02:19.488 LINK interrupt_tgt 00:02:19.488 LINK zipf 00:02:19.488 LINK vhost 00:02:19.488 LINK iscsi_tgt 00:02:19.488 LINK cmb_copy 00:02:19.488 LINK spdk_trace_record 00:02:19.488 LINK pmr_persistence 00:02:19.488 LINK verify 00:02:19.488 LINK ioat_perf 00:02:19.488 LINK spdk_tgt 00:02:19.488 LINK hello_world 00:02:19.488 LINK hello_sock 00:02:19.488 LINK mkfs 00:02:19.488 LINK bdev_svc 00:02:19.488 LINK hotplug 00:02:19.488 LINK hello_blob 00:02:19.488 LINK hello_bdev 00:02:19.488 LINK thread 00:02:19.488 LINK aer 00:02:19.748 CXX test/cpp_headers/accel_module.o 00:02:19.748 LINK spdk_dd 00:02:19.748 LINK arbitration 00:02:19.748 LINK idxd_perf 00:02:19.748 LINK reconnect 00:02:19.748 LINK nvmf 00:02:19.748 CXX test/cpp_headers/assert.o 00:02:19.748 CXX test/cpp_headers/barrier.o 00:02:19.748 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:19.748 LINK abort 00:02:19.748 CC examples/blob/cli/blobcli.o 00:02:19.748 LINK spdk_trace 00:02:19.748 LINK dif 00:02:19.748 CC test/event/reactor/reactor.o 00:02:19.748 CC test/event/reactor_perf/reactor_perf.o 00:02:19.748 LINK test_dma 
00:02:19.748 CC examples/vmd/led/led.o 00:02:19.748 CC test/nvme/reset/reset.o 00:02:19.748 CC test/app/histogram_perf/histogram_perf.o 00:02:19.748 LINK bdevio 00:02:20.029 CC test/app/jsoncat/jsoncat.o 00:02:20.029 CXX test/cpp_headers/base64.o 00:02:20.029 CC test/nvme/sgl/sgl.o 00:02:20.029 CC test/env/pci/pci_ut.o 00:02:20.029 CC test/env/memory/memory_ut.o 00:02:20.029 CXX test/cpp_headers/bdev.o 00:02:20.029 LINK accel_perf 00:02:20.029 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:20.029 CXX test/cpp_headers/bdev_module.o 00:02:20.029 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:20.029 CC test/nvme/overhead/overhead.o 00:02:20.029 CC test/app/stub/stub.o 00:02:20.029 CC test/nvme/e2edp/nvme_dp.o 00:02:20.029 CXX test/cpp_headers/bdev_zone.o 00:02:20.029 LINK nvme_manage 00:02:20.029 CC test/event/app_repeat/app_repeat.o 00:02:20.029 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:20.029 LINK spdk_nvme 00:02:20.029 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:20.029 LINK env_dpdk_post_init 00:02:20.029 LINK spdk_bdev 00:02:20.029 CXX test/cpp_headers/bit_array.o 00:02:20.029 LINK reactor 00:02:20.029 LINK reactor_perf 00:02:20.302 CC test/nvme/err_injection/err_injection.o 00:02:20.302 LINK led 00:02:20.302 LINK jsoncat 00:02:20.302 CC test/nvme/startup/startup.o 00:02:20.302 LINK histogram_perf 00:02:20.302 CC test/nvme/boot_partition/boot_partition.o 00:02:20.302 CC test/nvme/reserve/reserve.o 00:02:20.302 CC test/nvme/simple_copy/simple_copy.o 00:02:20.302 CC test/event/scheduler/scheduler.o 00:02:20.302 CC test/nvme/connect_stress/connect_stress.o 00:02:20.302 CXX test/cpp_headers/bit_pool.o 00:02:20.302 CC test/nvme/compliance/nvme_compliance.o 00:02:20.302 CXX test/cpp_headers/blob_bdev.o 00:02:20.302 CC test/nvme/fused_ordering/fused_ordering.o 00:02:20.302 CXX test/cpp_headers/blobfs_bdev.o 00:02:20.302 CXX test/cpp_headers/blobfs.o 00:02:20.302 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:20.302 CXX test/cpp_headers/blob.o 00:02:20.302 CC test/nvme/fdp/fdp.o 00:02:20.302 CXX test/cpp_headers/conf.o 00:02:20.302 CXX test/cpp_headers/config.o 00:02:20.302 CXX test/cpp_headers/cpuset.o 00:02:20.302 LINK app_repeat 00:02:20.302 CXX test/cpp_headers/crc16.o 00:02:20.302 LINK stub 00:02:20.302 CXX test/cpp_headers/crc32.o 00:02:20.302 CXX test/cpp_headers/crc64.o 00:02:20.302 LINK mem_callbacks 00:02:20.302 CC test/nvme/cuse/cuse.o 00:02:20.302 CXX test/cpp_headers/dif.o 00:02:20.569 LINK spdk_nvme_perf 00:02:20.569 LINK reset 00:02:20.569 LINK sgl 00:02:20.569 CXX test/cpp_headers/endian.o 00:02:20.569 CXX test/cpp_headers/dma.o 00:02:20.569 CXX test/cpp_headers/env_dpdk.o 00:02:20.569 LINK spdk_nvme_identify 00:02:20.569 CXX test/cpp_headers/env.o 00:02:20.569 CXX test/cpp_headers/event.o 00:02:20.569 CXX test/cpp_headers/fd_group.o 00:02:20.569 CXX test/cpp_headers/fd.o 00:02:20.569 LINK startup 00:02:20.569 LINK boot_partition 00:02:20.569 LINK overhead 00:02:20.569 LINK nvme_dp 00:02:20.569 LINK err_injection 00:02:20.569 LINK bdevperf 00:02:20.569 LINK connect_stress 00:02:20.569 LINK spdk_top 00:02:20.569 CXX test/cpp_headers/file.o 00:02:20.569 LINK reserve 00:02:20.569 LINK simple_copy 00:02:20.569 CXX test/cpp_headers/ftl.o 00:02:20.569 CXX test/cpp_headers/gpt_spec.o 00:02:20.569 LINK scheduler 00:02:20.569 CXX test/cpp_headers/hexlify.o 00:02:20.569 LINK pci_ut 00:02:20.569 CXX test/cpp_headers/histogram_data.o 00:02:20.836 LINK fused_ordering 00:02:20.836 LINK blobcli 00:02:20.836 CXX test/cpp_headers/idxd.o 00:02:20.836 CXX test/cpp_headers/idxd_spec.o 
00:02:20.836 LINK doorbell_aers 00:02:20.836 CXX test/cpp_headers/init.o 00:02:20.836 CXX test/cpp_headers/ioat.o 00:02:20.836 CXX test/cpp_headers/ioat_spec.o 00:02:20.836 CXX test/cpp_headers/iscsi_spec.o 00:02:20.836 LINK nvme_fuzz 00:02:20.836 CXX test/cpp_headers/json.o 00:02:20.836 CXX test/cpp_headers/jsonrpc.o 00:02:20.836 CXX test/cpp_headers/keyring.o 00:02:20.836 CXX test/cpp_headers/keyring_module.o 00:02:20.836 CXX test/cpp_headers/likely.o 00:02:20.836 CXX test/cpp_headers/log.o 00:02:20.836 CXX test/cpp_headers/lvol.o 00:02:20.836 LINK vhost_fuzz 00:02:20.836 CXX test/cpp_headers/memory.o 00:02:20.836 CXX test/cpp_headers/mmio.o 00:02:20.836 CXX test/cpp_headers/nbd.o 00:02:20.836 CXX test/cpp_headers/notify.o 00:02:20.836 CXX test/cpp_headers/nvme.o 00:02:20.836 CXX test/cpp_headers/nvme_intel.o 00:02:20.836 CXX test/cpp_headers/nvme_ocssd.o 00:02:20.836 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:20.836 CXX test/cpp_headers/nvme_spec.o 00:02:20.836 CXX test/cpp_headers/nvme_zns.o 00:02:20.836 CXX test/cpp_headers/nvmf_cmd.o 00:02:20.836 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:20.836 CXX test/cpp_headers/nvmf.o 00:02:20.836 CXX test/cpp_headers/nvmf_spec.o 00:02:20.836 LINK nvme_compliance 00:02:20.836 CXX test/cpp_headers/nvmf_transport.o 00:02:21.097 CXX test/cpp_headers/opal.o 00:02:21.097 LINK fdp 00:02:21.097 CXX test/cpp_headers/opal_spec.o 00:02:21.097 CXX test/cpp_headers/pci_ids.o 00:02:21.097 CXX test/cpp_headers/pipe.o 00:02:21.097 CXX test/cpp_headers/queue.o 00:02:21.097 CXX test/cpp_headers/reduce.o 00:02:21.097 CXX test/cpp_headers/rpc.o 00:02:21.097 CXX test/cpp_headers/scheduler.o 00:02:21.097 CXX test/cpp_headers/scsi.o 00:02:21.097 CXX test/cpp_headers/scsi_spec.o 00:02:21.097 CXX test/cpp_headers/sock.o 00:02:21.097 CXX test/cpp_headers/stdinc.o 00:02:21.097 CXX test/cpp_headers/string.o 00:02:21.097 CXX test/cpp_headers/thread.o 00:02:21.097 CXX test/cpp_headers/trace.o 00:02:21.097 CXX test/cpp_headers/trace_parser.o 00:02:21.097 CXX test/cpp_headers/tree.o 00:02:21.097 CXX test/cpp_headers/ublk.o 00:02:21.097 CXX test/cpp_headers/util.o 00:02:21.097 CXX test/cpp_headers/uuid.o 00:02:21.097 CXX test/cpp_headers/version.o 00:02:21.097 CXX test/cpp_headers/vfio_user_pci.o 00:02:21.097 CXX test/cpp_headers/vfio_user_spec.o 00:02:21.097 CXX test/cpp_headers/vhost.o 00:02:21.097 CXX test/cpp_headers/vmd.o 00:02:21.097 CXX test/cpp_headers/xor.o 00:02:21.097 CXX test/cpp_headers/zipf.o 00:02:21.667 LINK memory_ut 00:02:21.926 LINK cuse 00:02:22.495 LINK iscsi_fuzz 00:02:25.036 LINK esnap 00:02:25.295 00:02:25.295 real 0m47.750s 00:02:25.295 user 9m57.998s 00:02:25.295 sys 2m26.152s 00:02:25.295 19:33:06 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:25.295 19:33:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.295 ************************************ 00:02:25.295 END TEST make 00:02:25.295 ************************************ 00:02:25.295 19:33:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:25.295 19:33:06 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:25.295 19:33:06 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:25.295 19:33:06 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.295 19:33:06 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:25.295 19:33:06 -- pm/common@45 -- $ pid=1493640 00:02:25.295 19:33:06 -- pm/common@52 -- $ sudo kill -TERM 1493640 00:02:25.295 19:33:06 -- pm/common@43 -- $ for 
monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.295 19:33:06 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:25.295 19:33:06 -- pm/common@45 -- $ pid=1493643 00:02:25.295 19:33:06 -- pm/common@52 -- $ sudo kill -TERM 1493643 00:02:25.295 19:33:06 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.295 19:33:06 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:25.295 19:33:06 -- pm/common@45 -- $ pid=1493641 00:02:25.295 19:33:06 -- pm/common@52 -- $ sudo kill -TERM 1493641 00:02:25.295 19:33:06 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.295 19:33:06 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:25.295 19:33:06 -- pm/common@45 -- $ pid=1493642 00:02:25.295 19:33:06 -- pm/common@52 -- $ sudo kill -TERM 1493642 00:02:25.295 19:33:06 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:25.295 19:33:06 -- nvmf/common.sh@7 -- # uname -s 00:02:25.295 19:33:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:25.295 19:33:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:25.295 19:33:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:25.295 19:33:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:25.295 19:33:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:25.295 19:33:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:25.295 19:33:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:25.295 19:33:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:25.295 19:33:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:25.295 19:33:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:25.295 19:33:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:25.295 19:33:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:25.295 19:33:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:25.295 19:33:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:25.295 19:33:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:25.295 19:33:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:25.295 19:33:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:25.295 19:33:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:25.295 19:33:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:25.295 19:33:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:25.295 19:33:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.295 19:33:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.295 19:33:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.295 19:33:06 -- paths/export.sh@5 -- # export PATH 00:02:25.295 19:33:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.295 19:33:06 -- nvmf/common.sh@47 -- # : 0 00:02:25.295 19:33:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:25.295 19:33:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:25.295 19:33:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:25.295 19:33:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:25.295 19:33:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:25.295 19:33:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:25.295 19:33:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:25.295 19:33:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:25.295 19:33:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:25.295 19:33:06 -- spdk/autotest.sh@32 -- # uname -s 00:02:25.295 19:33:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:25.295 19:33:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:25.295 19:33:06 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:25.295 19:33:06 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:25.295 19:33:06 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:25.295 19:33:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:25.555 19:33:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:25.555 19:33:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:25.555 19:33:06 -- spdk/autotest.sh@48 -- # udevadm_pid=1548479 00:02:25.555 19:33:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:25.555 19:33:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:25.555 19:33:06 -- pm/common@17 -- # local monitor 00:02:25.555 19:33:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.555 19:33:06 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1548481 00:02:25.555 19:33:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.555 19:33:06 -- pm/common@21 -- # date +%s 00:02:25.555 19:33:06 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1548484 00:02:25.555 19:33:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.555 19:33:06 -- pm/common@21 -- # date +%s 00:02:25.555 19:33:06 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1548487 00:02:25.555 19:33:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.555 19:33:06 -- pm/common@21 -- # date +%s 00:02:25.555 19:33:06 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1548491 00:02:25.555 19:33:06 -- pm/common@26 -- # sleep 1 00:02:25.555 19:33:06 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713979986 00:02:25.555 19:33:06 -- pm/common@21 -- # date +%s 00:02:25.555 19:33:06 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713979986 00:02:25.555 19:33:06 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713979986 00:02:25.555 19:33:06 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713979986 00:02:25.555 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713979986_collect-vmstat.pm.log 00:02:25.555 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713979986_collect-bmc-pm.bmc.pm.log 00:02:25.555 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713979986_collect-cpu-load.pm.log 00:02:25.555 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713979986_collect-cpu-temp.pm.log 00:02:26.493 19:33:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:26.493 19:33:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:26.493 19:33:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:26.493 19:33:07 -- common/autotest_common.sh@10 -- # set +x 00:02:26.493 19:33:07 -- spdk/autotest.sh@59 -- # create_test_list 00:02:26.493 19:33:07 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:26.493 19:33:07 -- common/autotest_common.sh@10 -- # set +x 00:02:26.493 19:33:07 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:26.493 19:33:07 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.493 19:33:07 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.493 19:33:07 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:26.493 19:33:07 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.493 19:33:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:26.493 19:33:07 -- common/autotest_common.sh@1441 -- # uname 00:02:26.493 19:33:07 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:26.493 19:33:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:26.493 19:33:07 -- common/autotest_common.sh@1461 -- # uname 00:02:26.493 19:33:07 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:26.493 19:33:07 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:26.493 19:33:07 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:26.493 19:33:07 -- spdk/autotest.sh@72 -- # hash lcov 00:02:26.493 19:33:07 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:26.494 19:33:07 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:26.494 --rc lcov_branch_coverage=1 00:02:26.494 --rc lcov_function_coverage=1 00:02:26.494 --rc genhtml_branch_coverage=1 00:02:26.494 --rc genhtml_function_coverage=1 00:02:26.494 --rc 
genhtml_legend=1 00:02:26.494 --rc geninfo_all_blocks=1 00:02:26.494 ' 00:02:26.494 19:33:07 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:26.494 --rc lcov_branch_coverage=1 00:02:26.494 --rc lcov_function_coverage=1 00:02:26.494 --rc genhtml_branch_coverage=1 00:02:26.494 --rc genhtml_function_coverage=1 00:02:26.494 --rc genhtml_legend=1 00:02:26.494 --rc geninfo_all_blocks=1 00:02:26.494 ' 00:02:26.494 19:33:07 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:26.494 --rc lcov_branch_coverage=1 00:02:26.494 --rc lcov_function_coverage=1 00:02:26.494 --rc genhtml_branch_coverage=1 00:02:26.494 --rc genhtml_function_coverage=1 00:02:26.494 --rc genhtml_legend=1 00:02:26.494 --rc geninfo_all_blocks=1 00:02:26.494 --no-external' 00:02:26.494 19:33:07 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:26.494 --rc lcov_branch_coverage=1 00:02:26.494 --rc lcov_function_coverage=1 00:02:26.494 --rc genhtml_branch_coverage=1 00:02:26.494 --rc genhtml_function_coverage=1 00:02:26.494 --rc genhtml_legend=1 00:02:26.494 --rc geninfo_all_blocks=1 00:02:26.494 --no-external' 00:02:26.494 19:33:07 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:26.494 lcov: LCOV version 1.14 00:02:26.494 19:33:07 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 
00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:36.463 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:36.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:36.464 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:36.464 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 
00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:36.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:36.464 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:40.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:40.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:50.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:50.686 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:50.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:50.686 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:50.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:50.686 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:58.790 19:33:39 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:58.790 19:33:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:58.790 19:33:39 -- common/autotest_common.sh@10 -- # set +x 00:02:58.790 19:33:39 -- spdk/autotest.sh@91 -- # rm -f 00:02:58.790 19:33:39 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.357 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:59.357 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:59.357 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:59.357 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:59.357 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:59.357 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:59.357 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:59.357 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:59.357 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:59.357 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:59.357 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:59.357 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:59.615 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:59.615 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:59.615 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:59.615 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:59.615 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:59.615 19:33:41 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:59.615 19:33:41 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:59.615 19:33:41 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:59.615 19:33:41 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:59.615 19:33:41 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:59.615 19:33:41 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:59.615 19:33:41 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:59.615 19:33:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:59.615 19:33:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:59.615 19:33:41 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:59.615 19:33:41 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:59.615 19:33:41 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:59.615 19:33:41 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:59.615 19:33:41 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:59.615 19:33:41 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:59.615 No valid GPT data, bailing 00:02:59.615 19:33:41 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:59.615 19:33:41 -- scripts/common.sh@391 -- # pt= 00:02:59.615 19:33:41 -- scripts/common.sh@392 -- # return 1 00:02:59.615 19:33:41 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:59.615 1+0 records in 00:02:59.615 1+0 records out 00:02:59.615 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00209502 s, 501 MB/s 00:02:59.615 19:33:41 -- spdk/autotest.sh@118 -- # sync 00:02:59.615 19:33:41 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:59.615 19:33:41 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:59.615 19:33:41 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:01.514 19:33:43 -- spdk/autotest.sh@124 -- # uname -s 00:03:01.514 19:33:43 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:01.514 19:33:43 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:01.514 19:33:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:01.514 19:33:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:01.514 19:33:43 -- common/autotest_common.sh@10 -- # set +x 00:03:01.772 ************************************ 00:03:01.772 START TEST setup.sh 00:03:01.772 ************************************ 00:03:01.772 19:33:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:01.772 * Looking for test storage... 00:03:01.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:01.772 19:33:43 -- setup/test-setup.sh@10 -- # uname -s 00:03:01.772 19:33:43 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:01.772 19:33:43 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:01.772 19:33:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:01.772 19:33:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:01.772 19:33:43 -- common/autotest_common.sh@10 -- # set +x 00:03:01.772 ************************************ 00:03:01.772 START TEST acl 00:03:01.772 ************************************ 00:03:01.772 19:33:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:02.030 * Looking for test storage... 
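(The wipe step traced just above only zeroes /dev/nvme0n1 after the harness decides the disk is expendable: scripts/spdk-gpt.py finds no valid GPT data and bails, `blkid -s PTTYPE` reports no partition table, so block_in_use returns non-zero and dd clears the first MiB before sync. A stand-alone sketch of the same guard, assuming only util-linux blkid and coreutils dd; the device path is this builder's and purely illustrative, and the real harness layers spdk-gpt.py on top of the blkid leg shown here.)

#!/usr/bin/env bash
# Sketch: zero a disk's first MiB only when no partition table is present,
# mirroring the block_in_use/dd sequence in the trace above. The device
# path is illustrative; point it at a disk you are allowed to destroy.
dev=/dev/nvme0n1
# blkid prints the table type (gpt, dos, ...) and exits non-zero when the
# device has none, which is the "No valid GPT data, bailing" case above.
if pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null) && [[ -n $pt ]]; then
    echo "refusing to wipe $dev: $pt partition table found" >&2
    exit 1
fi
# One zeroed MiB is enough to clear the MBR and primary GPT headers.
dd if=/dev/zero of="$dev" bs=1M count=1
sync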
00:03:02.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:02.030 19:33:43 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:02.030 19:33:43 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:02.030 19:33:43 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:02.030 19:33:43 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:02.030 19:33:43 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:02.030 19:33:43 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:02.030 19:33:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:02.030 19:33:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:02.030 19:33:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:02.030 19:33:43 -- setup/acl.sh@12 -- # devs=() 00:03:02.030 19:33:43 -- setup/acl.sh@12 -- # declare -a devs 00:03:02.030 19:33:43 -- setup/acl.sh@13 -- # drivers=() 00:03:02.030 19:33:43 -- setup/acl.sh@13 -- # declare -A drivers 00:03:02.030 19:33:43 -- setup/acl.sh@51 -- # setup reset 00:03:02.030 19:33:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:02.030 19:33:43 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.405 19:33:44 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:03.405 19:33:44 -- setup/acl.sh@16 -- # local dev driver 00:03:03.405 19:33:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.405 19:33:44 -- setup/acl.sh@15 -- # setup output status 00:03:03.405 19:33:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.405 19:33:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:04.342 Hugepages 00:03:04.342 node hugesize free / total 00:03:04.342 19:33:45 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:04.342 19:33:45 -- setup/acl.sh@19 -- # continue 00:03:04.342 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.342 19:33:45 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:04.342 19:33:45 -- setup/acl.sh@19 -- # continue 00:03:04.342 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.342 19:33:45 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:04.342 19:33:45 -- setup/acl.sh@19 -- # continue 00:03:04.342 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.342 00:03:04.342 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:04.342 19:33:45 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:04.342 19:33:45 -- setup/acl.sh@19 -- # continue 00:03:04.342 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.342 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:04.342 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.342 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.342 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.342 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:04.342 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.342 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.342 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.342 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:04.342 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.343 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.343 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:03:04.343 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:04.343 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.343 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.343 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.603 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.603 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.603 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.603 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.603 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.603 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.603 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.603 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.603 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.603 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.603 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.603 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.603 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.603 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.603 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.603 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.603 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.603 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.603 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.603 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.603 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.603 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.603 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # continue 00:03:04.603 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:04.603 19:33:45 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:04.603 19:33:45 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:04.603 19:33:45 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:04.603 19:33:45 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:04.603 19:33:45 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:04.603 19:33:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.603 19:33:45 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:04.603 19:33:45 -- setup/acl.sh@54 -- # run_test denied denied 00:03:04.603 19:33:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:04.603 19:33:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:04.603 19:33:45 -- common/autotest_common.sh@10 -- # set +x 00:03:04.603 ************************************ 00:03:04.603 START TEST denied 00:03:04.603 ************************************ 00:03:04.603 19:33:46 -- common/autotest_common.sh@1111 -- # denied 00:03:04.603 19:33:46 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:04.603 19:33:46 -- setup/acl.sh@38 -- # setup output config 00:03:04.603 19:33:46 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:04.603 19:33:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.603 19:33:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:06.505 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:06.505 19:33:47 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:06.505 19:33:47 -- setup/acl.sh@28 -- # local dev driver 00:03:06.505 19:33:47 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:06.505 19:33:47 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:06.505 19:33:47 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:06.505 19:33:47 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:06.505 19:33:47 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:06.505 19:33:47 -- setup/acl.sh@41 -- # setup reset 00:03:06.505 19:33:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:06.505 19:33:47 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.440 00:03:08.440 real 0m3.832s 00:03:08.440 user 0m1.105s 00:03:08.440 sys 0m1.836s 00:03:08.440 19:33:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:08.440 19:33:49 -- common/autotest_common.sh@10 -- # set +x 00:03:08.440 ************************************ 00:03:08.440 END TEST denied 00:03:08.440 ************************************ 00:03:08.440 19:33:49 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:08.440 19:33:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:08.440 19:33:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:08.440 19:33:49 -- common/autotest_common.sh@10 -- # set +x 00:03:08.699 ************************************ 00:03:08.699 START TEST allowed 00:03:08.699 ************************************ 00:03:08.699 19:33:50 -- common/autotest_common.sh@1111 -- # allowed 00:03:08.699 19:33:50 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:08.699 19:33:50 -- setup/acl.sh@45 -- # setup output config 00:03:08.699 19:33:50 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:08.699 19:33:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.699 19:33:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:11.229 
0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:11.229 19:33:52 -- setup/acl.sh@47 -- # verify 00:03:11.229 19:33:52 -- setup/acl.sh@28 -- # local dev driver 00:03:11.229 19:33:52 -- setup/acl.sh@48 -- # setup reset 00:03:11.229 19:33:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.229 19:33:52 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.602 00:03:12.602 real 0m3.924s 00:03:12.602 user 0m1.045s 00:03:12.602 sys 0m1.727s 00:03:12.602 19:33:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:12.602 19:33:53 -- common/autotest_common.sh@10 -- # set +x 00:03:12.602 ************************************ 00:03:12.602 END TEST allowed 00:03:12.602 ************************************ 00:03:12.602 00:03:12.602 real 0m10.689s 00:03:12.602 user 0m3.270s 00:03:12.602 sys 0m5.422s 00:03:12.602 19:33:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:12.602 19:33:53 -- common/autotest_common.sh@10 -- # set +x 00:03:12.602 ************************************ 00:03:12.602 END TEST acl 00:03:12.602 ************************************ 00:03:12.602 19:33:53 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:12.602 19:33:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:12.602 19:33:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:12.602 19:33:53 -- common/autotest_common.sh@10 -- # set +x 00:03:12.602 ************************************ 00:03:12.602 START TEST hugepages 00:03:12.602 ************************************ 00:03:12.602 19:33:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:12.602 * Looking for test storage... 
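(The denied/allowed pair that just finished is driven entirely by setup.sh's PCI filter variables: with PCI_BLOCKED set, `setup.sh config` must log "Skipping denied controller at 0000:88:00.0", and with PCI_ALLOWED set the same command rebinds the controller from the kernel nvme driver to vfio-pci. A condensed sketch of that round trip, assuming an SPDK checkout at the illustrative $SPDK_DIR and this builder's BDF.)

#!/usr/bin/env bash
# Sketch: exercise setup.sh's allow/block lists the way test/setup/acl.sh does.
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # illustrative location of the checkout
bdf=0000:88:00.0                      # the NVMe controller under test
# Denied: a blocked controller must be skipped by the config pass.
PCI_BLOCKED=" $bdf" "$SPDK_DIR/scripts/setup.sh" config \
    | grep "Skipping denied controller at $bdf"
"$SPDK_DIR/scripts/setup.sh" reset
# Allowed: with only this BDF permitted, config rebinds it to a
# userspace driver (nvme -> vfio-pci in the log above).
PCI_ALLOWED="$bdf" "$SPDK_DIR/scripts/setup.sh" config \
    | grep -E "$bdf .*: nvme -> .*"
"$SPDK_DIR/scripts/setup.sh" reset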
00:03:12.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:12.862 19:33:54 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:12.862 19:33:54 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:12.862 19:33:54 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:12.862 19:33:54 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:12.862 19:33:54 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:12.862 19:33:54 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:12.862 19:33:54 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:12.862 19:33:54 -- setup/common.sh@18 -- # local node= 00:03:12.863 19:33:54 -- setup/common.sh@19 -- # local var val 00:03:12.863 19:33:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.863 19:33:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.863 19:33:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.863 19:33:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.863 19:33:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.863 19:33:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 34742784 kB' 'MemAvailable: 39899800 kB' 'Buffers: 2696 kB' 'Cached: 18798176 kB' 'SwapCached: 0 kB' 'Active: 14698320 kB' 'Inactive: 4646328 kB' 'Active(anon): 14084328 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547032 kB' 'Mapped: 240224 kB' 'Shmem: 13540552 kB' 'KReclaimable: 541132 kB' 'Slab: 933216 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 392084 kB' 'KernelStack: 12960 kB' 'PageTables: 9904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 15265396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196568 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB' 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.863 19:33:54 -- setup/common.sh@32 -- # continue 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.863 19:33:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.863 19:33:54 
-- setup/common.sh@32 -- # [... xtrace elided: each remaining /proc/meminfo key (Zswap through HugePages_Surp) is tested against Hugepagesize and skipped via continue ...] 00:03:12.864 19:33:54 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.864 19:33:54 -- setup/common.sh@33 -- # echo 2048 00:03:12.864 19:33:54 -- setup/common.sh@33 -- # return 0 00:03:12.864 19:33:54 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:12.864 19:33:54 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:12.864 19:33:54 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:12.864 19:33:54 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:12.864 19:33:54 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
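The scan traced above is setup/common.sh's get_meminfo walking /proc/meminfo field by field until the requested key (here Hugepagesize) matches, then echoing its value. A minimal standalone sketch of that lookup pattern, simplified from the traced helper (the real script buffers the file with mapfile and also handles per-node files; the function name here is illustrative):

    get_meminfo_key() {
        # scan "<key>: <value> ..." lines and print the value for the requested key
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_key Hugepagesize   # prints 2048 on this node, matching the trace
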
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:12.864 19:33:54 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:12.864 19:33:54 -- setup/hugepages.sh@207 -- # get_nodes 00:03:12.864 19:33:54 -- setup/hugepages.sh@27 -- # local node 00:03:12.864 19:33:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.864 19:33:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:12.864 19:33:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.864 19:33:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:12.864 19:33:54 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:12.864 19:33:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.864 19:33:54 -- setup/hugepages.sh@208 -- # clear_hp 00:03:12.864 19:33:54 -- setup/hugepages.sh@37 -- # local node hp 00:03:12.864 19:33:54 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.864 19:33:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.864 19:33:54 -- setup/hugepages.sh@41 -- # echo 0 00:03:12.864 19:33:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.864 19:33:54 -- setup/hugepages.sh@41 -- # echo 0 00:03:12.864 19:33:54 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.864 19:33:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.864 19:33:54 -- setup/hugepages.sh@41 -- # echo 0 00:03:12.864 19:33:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.864 19:33:54 -- setup/hugepages.sh@41 -- # echo 0 00:03:12.864 19:33:54 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:12.864 19:33:54 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:12.864 19:33:54 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:12.864 19:33:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:12.864 19:33:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:12.864 19:33:54 -- common/autotest_common.sh@10 -- # set +x 00:03:12.864 ************************************ 00:03:12.864 START TEST default_setup 00:03:12.864 ************************************ 00:03:12.864 19:33:54 -- common/autotest_common.sh@1111 -- # default_setup 00:03:12.864 19:33:54 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:12.864 19:33:54 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:12.864 19:33:54 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:12.864 19:33:54 -- setup/hugepages.sh@51 -- # shift 00:03:12.864 19:33:54 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:12.864 19:33:54 -- setup/hugepages.sh@52 -- # local node_ids 00:03:12.864 19:33:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.864 19:33:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:12.864 19:33:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:12.864 19:33:54 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:12.864 19:33:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.864 19:33:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:12.864 19:33:54 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:12.864 19:33:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.864 19:33:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:12.864 19:33:54 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
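On the arithmetic in the get_test_nr_hugepages trace above: the requested size of 2097152 divided by the default_hugepages value of 2048 gives the nr_hugepages=1024 seen in the trace, and since only node 0 was passed, all 1024 pages land in nodes_test[0]. The clear_hp pass before it zeroes every per-node hugepage pool so the test starts from a clean slate. A rough standalone equivalent of those two steps (the sysfs paths are the standard kernel layout; writing to them needs root, and treating size as kB is an assumption consistent with the numbers above):

    size=2097152                                   # requested size (kB, assumed)
    default_hugepages=2048                         # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size / default_hugepages ))   # = 1024, as in the trace

    # clear_hp equivalent: zero every hugepage pool on every NUMA node
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
    export CLEAR_HUGE=yes
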
00:03:12.864 19:33:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:12.864 19:33:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:12.864 19:33:54 -- setup/hugepages.sh@73 -- # return 0 00:03:12.864 19:33:54 -- setup/hugepages.sh@137 -- # setup output 00:03:12.864 19:33:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.864 19:33:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.244 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:14.244 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:14.244 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:14.244 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:14.244 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:14.244 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:14.244 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:14.244 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:14.244 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:14.244 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:14.244 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:14.244 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:14.244 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:14.244 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:14.244 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:14.244 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:15.186 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:15.186 19:33:56 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:15.186 19:33:56 -- setup/hugepages.sh@89 -- # local node 00:03:15.186 19:33:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:15.186 19:33:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:15.186 19:33:56 -- setup/hugepages.sh@92 -- # local surp 00:03:15.186 19:33:56 -- setup/hugepages.sh@93 -- # local resv 00:03:15.186 19:33:56 -- setup/hugepages.sh@94 -- # local anon 00:03:15.186 19:33:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:15.186 19:33:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:15.186 19:33:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:15.186 19:33:56 -- setup/common.sh@18 -- # local node= 00:03:15.186 19:33:56 -- setup/common.sh@19 -- # local var val 00:03:15.186 19:33:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.186 19:33:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.186 19:33:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.186 19:33:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.186 19:33:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.186 19:33:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.186 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.186 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.186 19:33:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36854000 kB' 'MemAvailable: 42011016 kB' 'Buffers: 2696 kB' 'Cached: 18798424 kB' 'SwapCached: 0 kB' 'Active: 14721788 kB' 'Inactive: 4646328 kB' 'Active(anon): 14107796 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570792 kB' 'Mapped: 240748 kB' 'Shmem: 13540800 kB' 'KReclaimable: 541132 kB' 'Slab: 932976 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391844 kB' 'KernelStack: 12848 
kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15291752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196668 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB' 00:03:15.186 19:33:56 -- setup/common.sh@32 -- # [... xtrace elided: meminfo keys MemTotal through VmallocTotal are each tested against AnonHugePages and skipped via continue ...]
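The mem=("${mem[@]#Node +([0-9]) }") step visible in these traces exists because the per-node files under /sys/devices/system/node/nodeN/meminfo prefix every line with "Node N ", while /proc/meminfo does not; stripping that prefix lets one scan loop serve both the global and per-node cases. A sketch of that per-node variant (extglob is required for the +([0-9]) pattern; the function name is illustrative):

    shopt -s extglob   # enables the +([0-9]) extended pattern used below

    get_node_meminfo_key() {
        local get=$1 node=$2 mem line var val _
        mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node N " from each line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_node_meminfo_key HugePages_Total 0   # hugepages currently pooled on node 0
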
setup/common.sh@31 -- # read -r var val _ 00:03:15.188 19:33:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.188 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.188 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.188 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.188 19:33:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.188 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.188 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.188 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.188 19:33:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.188 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.188 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.188 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.188 19:33:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.188 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.188 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.188 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.188 19:33:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.188 19:33:56 -- setup/common.sh@33 -- # echo 0 00:03:15.188 19:33:56 -- setup/common.sh@33 -- # return 0 00:03:15.188 19:33:56 -- setup/hugepages.sh@97 -- # anon=0 00:03:15.188 19:33:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:15.188 19:33:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.188 19:33:56 -- setup/common.sh@18 -- # local node= 00:03:15.188 19:33:56 -- setup/common.sh@19 -- # local var val 00:03:15.188 19:33:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.188 19:33:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.188 19:33:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.188 19:33:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.188 19:33:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.188 19:33:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.188 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.188 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.188 19:33:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36853736 kB' 'MemAvailable: 42010752 kB' 'Buffers: 2696 kB' 'Cached: 18798428 kB' 'SwapCached: 0 kB' 'Active: 14716320 kB' 'Inactive: 4646328 kB' 'Active(anon): 14102328 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564948 kB' 'Mapped: 240400 kB' 'Shmem: 13540804 kB' 'KReclaimable: 541132 kB' 'Slab: 932976 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391844 kB' 'KernelStack: 12848 kB' 'PageTables: 9400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15286660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196648 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 
kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB' 00:03:15.188 19:33:56 -- setup/common.sh@32 -- # [... xtrace elided: meminfo keys MemTotal through HugePages_Free are each tested against HugePages_Surp and skipped via continue ...] 00:03:15.189 19:33:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.189 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.189 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 19:33:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.189 19:33:56 -- setup/common.sh@33 -- # echo 0 00:03:15.189 19:33:56 -- setup/common.sh@33 -- # return 0 00:03:15.189 19:33:56 -- setup/hugepages.sh@99 -- # surp=0 00:03:15.189 19:33:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.189 19:33:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.189 19:33:56 -- setup/common.sh@18 -- # local node= 00:03:15.189 19:33:56 -- setup/common.sh@19 -- # local var val 00:03:15.189 19:33:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:15.189 19:33:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.189 19:33:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.189 19:33:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.189 19:33:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.189 19:33:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.189 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.189 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.189 19:33:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36853736 kB' 'MemAvailable: 42010752 kB' 'Buffers: 2696 kB' 'Cached: 18798440 kB' 'SwapCached: 0 kB' 'Active: 14716144 kB' 'Inactive: 4646328 kB' 'Active(anon): 14102152 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564668 kB' 'Mapped: 240304 kB' 'Shmem: 13540816 kB' 'KReclaimable: 541132 kB' 'Slab: 933008 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391876 kB' 'KernelStack: 12832 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15285660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196632 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB' 00:03:15.189 19:33:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.190 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.190 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 19:33:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.190 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.190 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 19:33:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.190 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.190 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.190 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.190 19:33:56 -- setup/common.sh@32 -- # 
[... xtrace elided: meminfo keys Buffers through AnonHugePages are each tested against HugePages_Rsvd and skipped via continue ...] 00:03:15.191 19:33:56 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # continue 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.191 19:33:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.191 19:33:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.191 19:33:56 -- setup/common.sh@33 -- # echo 0 00:03:15.191 19:33:56 -- setup/common.sh@33 -- # return 0 00:03:15.191 19:33:56 -- setup/hugepages.sh@100 -- # resv=0 00:03:15.191 19:33:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:15.191 nr_hugepages=1024 00:03:15.191 19:33:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:15.191 resv_hugepages=0 00:03:15.191 19:33:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:15.191 surplus_hugepages=0 00:03:15.191 19:33:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:15.191 anon_hugepages=0 00:03:15.191 19:33:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.191 19:33:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:15.191 19:33:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:15.191 19:33:56 -- setup/common.sh@17 -- # local 
00:03:15.191 19:33:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:15.191 19:33:56 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:15.191 19:33:56 -- setup/common.sh@18 -- # local node=
00:03:15.191 19:33:56 -- setup/common.sh@19 -- # local var val
00:03:15.191 19:33:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.191 19:33:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.191 19:33:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.191 19:33:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.191 19:33:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.191 19:33:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.191 19:33:56 -- setup/common.sh@31 -- # IFS=': '
00:03:15.191 19:33:56 -- setup/common.sh@31 -- # read -r var val _
00:03:15.191 19:33:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36854060 kB' 'MemAvailable: 42011076 kB' 'Buffers: 2696 kB' 'Cached: 18798440 kB' 'SwapCached: 0 kB' 'Active: 14715944 kB' 'Inactive: 4646328 kB' 'Active(anon): 14101952 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564548 kB' 'Mapped: 240304 kB' 'Shmem: 13540816 kB' 'KReclaimable: 541132 kB' 'Slab: 933008 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391876 kB' 'KernelStack: 12880 kB' 'PageTables: 9420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15285672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196632 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
[... setup/common.sh@31-32: IFS=': '; read -r var val _; continue, repeated for each meminfo key until HugePages_Total matches ...]
00:03:15.454 19:33:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:15.454 19:33:56 -- setup/common.sh@33 -- # echo 1024
00:03:15.454 19:33:56 -- setup/common.sh@33 -- # return 0
00:03:15.454 19:33:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.454 19:33:56 -- setup/hugepages.sh@112 -- # get_nodes
00:03:15.454 19:33:56 -- setup/hugepages.sh@27 -- # local node
00:03:15.454 19:33:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.454 19:33:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:15.454 19:33:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.454 19:33:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:15.454 19:33:56 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:15.454 19:33:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
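The next lookup passes a node argument: get_meminfo HugePages_Surp 0 reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo and strips the "Node 0 " prefix those per-node lines carry, which is exactly what the @18/@23-24/@29 lines below trace. A sketch of that branch (node_meminfo is a hypothetical name used here for illustration; in the script it is the same get_meminfo with an optional second argument):

    shopt -s extglob                                # for the +([0-9]) pattern below
    node_meminfo() {
        local get=$1 node=$2                        # node may be empty (system-wide)
        local var val mem_f mem line
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node lines read "Node 0 HugePages_Surp: 0"; drop the prefix so
        # the same "Key: value" parse works for both files
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }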
00:03:15.454 19:33:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.454 19:33:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.454 19:33:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:15.454 19:33:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.454 19:33:56 -- setup/common.sh@18 -- # local node=0
00:03:15.454 19:33:56 -- setup/common.sh@19 -- # local var val
00:03:15.454 19:33:56 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.454 19:33:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.454 19:33:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:15.454 19:33:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:15.454 19:33:56 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.454 19:33:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.454 19:33:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21859940 kB' 'MemUsed: 10969944 kB' 'SwapCached: 0 kB' 'Active: 7185628 kB' 'Inactive: 268120 kB' 'Active(anon): 6785352 kB' 'Inactive(anon): 0 kB' 'Active(file): 400276 kB' 'Inactive(file): 268120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7303576 kB' 'Mapped: 60944 kB' 'AnonPages: 153500 kB' 'Shmem: 6635180 kB' 'KernelStack: 7560 kB' 'PageTables: 4684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281476 kB' 'Slab: 517336 kB' 'SReclaimable: 281476 kB' 'SUnreclaim: 235860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... setup/common.sh@31-32: IFS=': '; read -r var val _; continue, repeated for each node0 meminfo key until HugePages_Surp matches ...]
00:03:15.455 19:33:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.455 19:33:56 -- setup/common.sh@33 -- # echo 0
00:03:15.455 19:33:56 -- setup/common.sh@33 -- # return 0
00:03:15.455 19:33:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:15.455 19:33:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.455 19:33:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.455 19:33:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.455 19:33:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:15.455 node0=1024 expecting 1024
00:03:15.455 19:33:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:15.455
00:03:15.455 real	0m2.473s
00:03:15.455 user	0m0.660s
00:03:15.455 sys	0m0.867s
00:03:15.455 19:33:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:15.455 19:33:56 -- common/autotest_common.sh@10 -- # set +x
00:03:15.455 ************************************
00:03:15.455 END TEST default_setup
00:03:15.455 ************************************
00:03:15.455 19:33:56 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:15.455 19:33:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:15.455 19:33:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:15.455 19:33:56 -- common/autotest_common.sh@10 -- # set +x
00:03:15.455 ************************************
00:03:15.455 START TEST per_node_1G_alloc
00:03:15.455 ************************************
00:03:15.455 19:33:56 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:03:15.455 19:33:56 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:15.455 19:33:56 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:15.455 19:33:56 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:15.455 19:33:56 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:15.455 19:33:56 -- setup/hugepages.sh@51 -- # shift
00:03:15.455 19:33:56 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:15.455 19:33:56 -- setup/hugepages.sh@52 -- # local node_ids
00:03:15.455 19:33:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:15.455 19:33:56 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:15.455 19:33:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:15.455 19:33:56 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:15.455 19:33:56 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:15.455 19:33:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:15.455 19:33:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:15.455 19:33:56 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:15.455 19:33:56 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:15.455 19:33:56 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:15.455 19:33:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:15.455 19:33:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:15.455 19:33:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:15.455 19:33:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:15.455 19:33:56 -- setup/hugepages.sh@73 -- # return 0
00:03:15.455 19:33:56 -- setup/hugepages.sh@146 -- # NRHUGE=512
19:33:56 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:15.456 19:33:56 -- setup/hugepages.sh@146 -- # setup output
00:03:15.456 19:33:56 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:15.456 19:33:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:16.395 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:16.395 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:16.395 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:16.395 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:16.395 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:16.395 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:16.395 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:16.395 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:16.395 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:16.656 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:16.656 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:16.656 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:16.656 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:16.656 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:16.656 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:16.656 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:16.656 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
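Before the verification pass starts, note how the request above was sized: get_test_nr_hugepages was called with 1048576 (kB) and the node list 0 1, and set nr_hugepages=512 for each node. With the 2048 kB Hugepagesize these meminfo snapshots report, that is one gigabyte per node. A worked version of the arithmetic (variable names are mine, and the exact division inside the helper is an assumption consistent with the traced input and output):

    size_kb=1048576                      # requested allocation per node, in kB (1 GiB)
    hugepage_kb=2048                     # Hugepagesize from /proc/meminfo
    echo $(( size_kb / hugepage_kb ))    # 512, hence NRHUGE=512 HUGENODE=0,1 above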
00:03:16.656 19:33:58 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:16.656 19:33:58 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:16.656 19:33:58 -- setup/hugepages.sh@89 -- # local node
00:03:16.656 19:33:58 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:16.656 19:33:58 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:16.656 19:33:58 -- setup/hugepages.sh@92 -- # local surp
00:03:16.656 19:33:58 -- setup/hugepages.sh@93 -- # local resv
00:03:16.656 19:33:58 -- setup/hugepages.sh@94 -- # local anon
00:03:16.656 19:33:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:16.656 19:33:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:16.656 19:33:58 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:16.656 19:33:58 -- setup/common.sh@18 -- # local node=
00:03:16.656 19:33:58 -- setup/common.sh@19 -- # local var val
00:03:16.656 19:33:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.656 19:33:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.656 19:33:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.656 19:33:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.656 19:33:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.656 19:33:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.656 19:33:58 -- setup/common.sh@31 -- # IFS=': '
00:03:16.656 19:33:58 -- setup/common.sh@31 -- # read -r var val _
00:03:16.656 19:33:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36855232 kB' 'MemAvailable: 42012248 kB' 'Buffers: 2696 kB' 'Cached: 18798508 kB' 'SwapCached: 0 kB' 'Active: 14717064 kB' 'Inactive: 4646328 kB' 'Active(anon): 14103072 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565304 kB' 'Mapped: 240356 kB' 'Shmem: 13540884 kB' 'KReclaimable: 541132 kB' 'Slab: 932992 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391860 kB' 'KernelStack: 12928 kB' 'PageTables: 9460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15285848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196824 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
[... setup/common.sh@31-32: IFS=': '; read -r var val _; continue, repeated for each meminfo key until AnonHugePages matches ...]
00:03:16.658 19:33:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:16.658 19:33:58 -- setup/common.sh@33 -- # echo 0
00:03:16.658 19:33:58 -- setup/common.sh@33 -- # return 0
00:03:16.658 19:33:58 -- setup/hugepages.sh@97 -- # anon=0
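With anon in hand, verify_nr_hugepages reads the surplus and reserved counts the same way before comparing totals, which is what the next two lookups do. A skeleton of the whole check, pieced together from the hugepages.sh@89-110 trace lines (the echo format and the NRHUGE reference are simplifications of mine, and get_meminfo is the sketch shown earlier):

    verify_nr_hugepages() {
        local surp resv anon=0
        # only count AnonHugePages when THP is not "[never]" (the @96 check above)
        if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
            anon=$(get_meminfo AnonHugePages)
        fi
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        echo "nr_hugepages=$NRHUGE resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
        # the kernel must report exactly the requested number of pages
        (( $(get_meminfo HugePages_Total) == NRHUGE + surp + resv ))
    }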
00:03:16.658 19:33:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:16.658 19:33:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.658 19:33:58 -- setup/common.sh@18 -- # local node=
00:03:16.658 19:33:58 -- setup/common.sh@19 -- # local var val
00:03:16.658 19:33:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.658 19:33:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.658 19:33:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.658 19:33:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.658 19:33:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.658 19:33:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.658 19:33:58 -- setup/common.sh@31 -- # IFS=': '
00:03:16.658 19:33:58 -- setup/common.sh@31 -- # read -r var val _
00:03:16.658 19:33:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36852960 kB' 'MemAvailable: 42009976 kB' 'Buffers: 2696 kB' 'Cached: 18798516 kB' 'SwapCached: 0 kB' 'Active: 14716144 kB' 'Inactive: 4646328 kB' 'Active(anon): 14102152 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564372 kB' 'Mapped: 240392 kB' 'Shmem: 13540892 kB' 'KReclaimable: 541132 kB' 'Slab: 932972 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391840 kB' 'KernelStack: 12848 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15285504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196760 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
[... setup/common.sh@31-32: IFS=': '; read -r var val _; continue, repeated for each meminfo key until HugePages_Surp matches ...]
-r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.660 19:33:58 -- setup/common.sh@33 -- # echo 0 00:03:16.660 19:33:58 -- setup/common.sh@33 -- # return 0 00:03:16.660 19:33:58 -- setup/hugepages.sh@99 -- # surp=0 00:03:16.660 19:33:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:16.660 19:33:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.660 19:33:58 -- setup/common.sh@18 -- # local node= 00:03:16.660 19:33:58 -- setup/common.sh@19 -- # local var val 00:03:16.660 19:33:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:16.660 19:33:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.660 19:33:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.660 19:33:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.660 19:33:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.660 19:33:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36852960 kB' 'MemAvailable: 42009976 kB' 'Buffers: 2696 kB' 'Cached: 18798516 kB' 'SwapCached: 0 kB' 'Active: 14716144 kB' 'Inactive: 4646328 kB' 'Active(anon): 14102152 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564372 kB' 'Mapped: 240392 kB' 'Shmem: 13540892 kB' 'KReclaimable: 541132 kB' 'Slab: 932972 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391840 kB' 'KernelStack: 12848 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15285504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196760 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB' 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # 
continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.660 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.660 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.661 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.661 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 
19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.923 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.923 19:33:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.924 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.924 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.924 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.924 19:33:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.924 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.924 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.924 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.924 19:33:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.924 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.924 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.924 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.924 19:33:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.924 19:33:58 -- setup/common.sh@33 -- # echo 0 00:03:16.924 19:33:58 -- setup/common.sh@33 -- # return 0 00:03:16.924 19:33:58 -- setup/hugepages.sh@100 -- # resv=0 00:03:16.924 19:33:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:16.924 nr_hugepages=1024 00:03:16.924 19:33:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.924 resv_hugepages=0 00:03:16.924 19:33:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.924 surplus_hugepages=0 00:03:16.924 19:33:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.924 anon_hugepages=0 00:03:16.924 19:33:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.924 19:33:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:16.924 19:33:58 -- setup/hugepages.sh@110 -- # get_meminfo 
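The lookup this trace keeps exercising reduces to one small shell pattern: walk a meminfo file line by line, split each line on ': ' so the field name lands in one variable and its value in another, and echo the value once the requested key matches. A minimal standalone sketch of that pattern follows; the function name and the sed-based prefix cleanup are illustrative assumptions, not the exact SPDK helper:

    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local var val _

        # Per-node counters live in sysfs; those files prefix every line with "Node N ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        # IFS=': ' splits "HugePages_Rsvd:    0" into var=HugePages_Rsvd, val=0,
        # which is exactly the read loop visible in the xtrace above.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

Called as, e.g., resv=$(get_meminfo_sketch HugePages_Rsvd), it prints the 0 computed above; the long runs of [[ ... ]] / continue in the trace are just this loop unrolled by xtrace, one entry per meminfo field.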
00:03:16.924 19:33:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:16.924 19:33:58 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:16.924 19:33:58 -- setup/common.sh@18 -- # local node=
00:03:16.924 19:33:58 -- setup/common.sh@19 -- # local var val
00:03:16.924 19:33:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.924 19:33:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.924 19:33:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.924 19:33:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.924 19:33:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.924 19:33:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.924 19:33:58 -- setup/common.sh@31 -- # IFS=': '
00:03:16.924 19:33:58 -- setup/common.sh@31 -- # read -r var val _
00:03:16.924 19:33:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36851980 kB' 'MemAvailable: 42008996 kB' 'Buffers: 2696 kB' 'Cached: 18798540 kB' 'SwapCached: 0 kB' 'Active: 14716388 kB' 'Inactive: 4646328 kB' 'Active(anon): 14102396 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564652 kB' 'Mapped: 240332 kB' 'Shmem: 13540916 kB' 'KReclaimable: 541132 kB' 'Slab: 932988 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391856 kB' 'KernelStack: 12864 kB' 'PageTables: 9244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15285524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196776 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
[... xtrace elided: the snapshot is scanned field by field until HugePages_Total matches ...]
00:03:16.925 19:33:58 -- setup/common.sh@33 -- # echo 1024
00:03:16.925 19:33:58 -- setup/common.sh@33 -- # return 0
00:03:16.925 19:33:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:16.925 19:33:58 -- setup/hugepages.sh@112 -- # get_nodes
00:03:16.925 19:33:58 -- setup/hugepages.sh@27 -- # local node
00:03:16.925 19:33:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.925 19:33:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:16.925 19:33:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:16.925 19:33:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:16.925 19:33:58 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:16.925 19:33:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:16.925 19:33:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:16.925 19:33:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:16.925 19:33:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:16.925 19:33:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.925 19:33:58 -- setup/common.sh@18 -- # local node=0
00:03:16.925 19:33:58 -- setup/common.sh@19 -- # local var val
00:03:16.925 19:33:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.925 19:33:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.925 19:33:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:16.925 19:33:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:16.925 19:33:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.925 19:33:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.925 19:33:58 -- setup/common.sh@31 -- # IFS=': '
00:03:16.925 19:33:58 -- setup/common.sh@31 -- # read -r var val _
00:03:16.925 19:33:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22902036 kB' 'MemUsed: 9927848 kB' 'SwapCached: 0 kB' 'Active: 7185760 kB' 'Inactive: 268120 kB' 'Active(anon): 6785484 kB' 'Inactive(anon): 0 kB' 'Active(file): 400276 kB' 'Inactive(file): 268120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7303676 kB' 'Mapped: 60944 kB' 'AnonPages: 153352 kB' 'Shmem: 6635280 kB' 'KernelStack: 7592 kB' 'PageTables: 4684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281476 kB' 'Slab: 517280 kB' 'SReclaimable: 281476 kB' 'SUnreclaim: 235804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: the node0 snapshot is scanned field by field until HugePages_Surp matches ...]
00:03:16.926 19:33:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.926 19:33:58 -- setup/common.sh@33 -- # echo 0
00:03:16.926 19:33:58 -- setup/common.sh@33 -- # return 0
00:03:16.926 19:33:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:16.926 19:33:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:16.926 19:33:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:16.926 19:33:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:16.926 19:33:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.926 19:33:58 -- setup/common.sh@18 -- # local node=1
00:03:16.926 19:33:58 -- setup/common.sh@19 -- # local var val
00:03:16.926 19:33:58 -- setup/common.sh@20 -- # local mem_f mem
00:03:16.926 19:33:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.926 19:33:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:16.926 19:33:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:16.926 19:33:58 -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.926 19:33:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.926 19:33:58 -- setup/common.sh@31 -- # IFS=': '
00:03:16.926 19:33:58 -- setup/common.sh@31 -- # read -r var val _
00:03:16.926 19:33:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 13949944 kB' 'MemUsed: 13761880 kB' 'SwapCached: 0 kB' 'Active: 7531012 kB' 'Inactive: 4378208 kB' 'Active(anon): 7317296 kB' 'Inactive(anon): 0 kB' 'Active(file): 213716 kB' 'Inactive(file): 4378208 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11497576 kB' 'Mapped: 179388 kB' 'AnonPages: 411692 kB' 'Shmem: 6905652 kB' 'KernelStack: 5320 kB' 'PageTables: 4724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 259656 kB' 'Slab: 415676 kB' 'SReclaimable: 259656 kB' 'SUnreclaim: 156020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:16.926 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # continue 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:16.927 19:33:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:16.927 19:33:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.927 19:33:58 -- 
setup/common.sh@32 -- # continue
00:03:16.927 [xtrace condensed: setup/common.sh@31-32 repeat the IFS=': ' / read -r var val _ / "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / continue cycle for each non-matching /proc/meminfo field from Shmem through HugePages_Free]
00:03:16.927 19:33:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.927 19:33:58 -- setup/common.sh@33 -- # echo 0
00:03:16.927 19:33:58 -- setup/common.sh@33 -- # return 0
00:03:16.927 19:33:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:16.927 19:33:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:16.927 19:33:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:16.927 19:33:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:16.927 19:33:58 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:16.927 node0=512 expecting 512
00:03:16.927 19:33:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:16.927 19:33:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:16.927 19:33:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:16.927 19:33:58 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:16.927 node1=512 expecting 512
00:03:16.927 19:33:58 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:16.927 
00:03:16.927 real	0m1.393s
00:03:16.927 user	0m0.572s
00:03:16.927 sys	0m0.776s
00:03:16.927 19:33:58 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:16.927 19:33:58 -- common/autotest_common.sh@10 -- # set +x
00:03:16.927 ************************************
00:03:16.927 END TEST per_node_1G_alloc
00:03:16.927 ************************************
00:03:16.927 19:33:58 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
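(For readers following the trace: the START/END banners and the real/user/sys block above come from a run_test-style wrapper around each test function. A minimal sketch of such a wrapper follows, with assumed names -- the real helper lives in SPDK's common test scripts and does more bookkeeping:)

# Sketch of a run_test-style wrapper (assumed shape, not the verbatim
# SPDK helper): banner, timed invocation, banner -- the exact pattern
# visible in the log above.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # bash's time keyword emits the real/user/sys block
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

demo_test() { sleep 0.1; }        # hypothetical test body
run_test_sketch demo_test demo_test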
00:03:16.927 19:33:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:16.927 19:33:58 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:16.927 19:33:58 -- common/autotest_common.sh@10 -- # set +x
00:03:16.927 ************************************
00:03:16.927 START TEST even_2G_alloc
00:03:16.927 ************************************
00:03:16.928 19:33:58 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:03:16.928 19:33:58 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:16.928 19:33:58 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:16.928 19:33:58 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:16.928 19:33:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:16.928 19:33:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:16.928 19:33:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:16.928 19:33:58 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:16.928 19:33:58 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:16.928 19:33:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:16.928 19:33:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:16.928 19:33:58 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:16.928 19:33:58 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:16.928 19:33:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:16.928 19:33:58 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:16.928 19:33:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:16.928 19:33:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:16.928 19:33:58 -- setup/hugepages.sh@83 -- # : 512
00:03:16.928 19:33:58 -- setup/hugepages.sh@84 -- # : 1
00:03:16.928 19:33:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:16.928 19:33:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:16.928 19:33:58 -- setup/hugepages.sh@83 -- # : 0
00:03:16.928 19:33:58 -- setup/hugepages.sh@84 -- # : 0
00:03:16.928 19:33:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:16.928 19:33:58 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:16.928 19:33:58 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:16.928 19:33:58 -- setup/hugepages.sh@153 -- # setup output
00:03:16.928 19:33:58 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:16.928 19:33:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:18.308 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:18.308 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:18.308 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:18.308 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:18.308 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:18.308 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:18.309 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:18.309 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:18.309 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:18.309 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:18.309 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:18.309 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:18.309 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:18.309 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:18.309 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:18.309 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
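(The get_test_nr_hugepages trace above is plain arithmetic: a 2097152 kB request divided by the 2048 kB default hugepage size gives nr_hugepages=1024, split evenly across the two NUMA nodes. A self-contained sketch of that split, reconstructed from the xtrace with assumed variable names:)

# Reconstruction of the per-node split seen in the setup/hugepages.sh
# trace: 2097152 kB of 2048 kB pages -> 1024 pages, 512 per node.
size_kb=2097152
default_hugepage_kb=2048
nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 1024

total_nodes=2
no_nodes=$total_nodes
declare -a nodes_test
while (( no_nodes > 0 )); do
    nodes_test[no_nodes - 1]=$(( nr_hugepages / total_nodes ))  # 512 each
    (( no_nodes-- ))
done
echo "node0=${nodes_test[0]} expecting 512"
echo "node1=${nodes_test[1]} expecting 512"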
00:03:18.309 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:18.309 19:33:59 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:18.309 19:33:59 -- setup/hugepages.sh@89 -- # local node
00:03:18.309 19:33:59 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:18.309 19:33:59 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:18.309 19:33:59 -- setup/hugepages.sh@92 -- # local surp
00:03:18.309 19:33:59 -- setup/hugepages.sh@93 -- # local resv
00:03:18.309 19:33:59 -- setup/hugepages.sh@94 -- # local anon
00:03:18.309 19:33:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:18.309 19:33:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:18.309 19:33:59 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:18.309 19:33:59 -- setup/common.sh@18 -- # local node=
00:03:18.309 19:33:59 -- setup/common.sh@19 -- # local var val
00:03:18.309 19:33:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.309 19:33:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.309 19:33:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.309 19:33:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.309 19:33:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.309 19:33:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.309 19:33:59 -- setup/common.sh@31 -- # IFS=': '
00:03:18.309 19:33:59 -- setup/common.sh@31 -- # read -r var val _
00:03:18.309 19:33:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36862000 kB' 'MemAvailable: 42019016 kB' 'Buffers: 2696 kB' 'Cached: 18798616 kB' 'SwapCached: 0 kB' 'Active: 14717196 kB' 'Inactive: 4646328 kB' 'Active(anon): 14103204 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565428 kB' 'Mapped: 240364 kB' 'Shmem: 13540992 kB' 'KReclaimable: 541132 kB' 'Slab: 932712 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391580 kB' 'KernelStack: 12928 kB' 'PageTables: 9452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15285916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196792 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
00:03:18.309 [xtrace condensed: setup/common.sh@31-32 repeat the read / "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / continue cycle for every non-matching field from MemTotal through HardwareCorrupted]
00:03:18.310 19:33:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:18.310 19:33:59 -- setup/common.sh@33 -- # echo 0
00:03:18.310 19:33:59 -- setup/common.sh@33 -- # return 0
00:03:18.310 19:33:59 -- setup/hugepages.sh@97 -- # anon=0
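(The long compare-and-continue runs condensed above are one bash function scanning a /proc/meminfo snapshot a line at a time. A minimal, runnable sketch of that get_meminfo-style scan, reconstructed from the xtrace -- the IFS=': ' split is why each field name and value land in var and val:)

# Sketch of the get_meminfo scan driving the trace above: snapshot the
# file into an array, then read it back field by field until the
# requested key matches, and print its value.
get_meminfo_sketch() {
    local get=$1 var val _
    local mem_f=/proc/meminfo
    mapfile -t mem < "$mem_f"
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the "continue" lines in the log
        echo "$val"                        # the unit ("kB") lands in $_
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

get_meminfo_sketch AnonHugePages   # prints 0 on the machine traced above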
00:03:18.310 19:33:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:18.310 19:33:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:18.310 19:33:59 -- setup/common.sh@18 -- # local node=
00:03:18.310 19:33:59 -- setup/common.sh@19 -- # local var val
00:03:18.310 19:33:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.310 19:33:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.310 19:33:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.310 19:33:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.310 19:33:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.310 19:33:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.310 19:33:59 -- setup/common.sh@31 -- # IFS=': '
00:03:18.310 19:33:59 -- setup/common.sh@31 -- # read -r var val _
00:03:18.310 19:33:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36862372 kB' 'MemAvailable: 42019388 kB' 'Buffers: 2696 kB' 'Cached: 18798616 kB' 'SwapCached: 0 kB' 'Active: 14717232 kB' 'Inactive: 4646328 kB' 'Active(anon): 14103240 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565476 kB' 'Mapped: 240344 kB' 'Shmem: 13540992 kB' 'KReclaimable: 541132 kB' 'Slab: 932688 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391556 kB' 'KernelStack: 12928 kB' 'PageTables: 9444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15285928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196760 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
00:03:18.310 [xtrace condensed: setup/common.sh@31-32 repeat the read / "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / continue cycle for every non-matching field from MemTotal through HugePages_Free]
00:03:18.311 19:33:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.311 19:33:59 -- setup/common.sh@33 -- # echo 0
00:03:18.311 19:33:59 -- setup/common.sh@33 -- # return 0
00:03:18.311 19:33:59 -- setup/hugepages.sh@99 -- # surp=0
00:03:18.311 19:33:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:18.311 19:33:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:18.311 19:33:59 -- setup/common.sh@18 -- # local node=
00:03:18.311 19:33:59 -- setup/common.sh@19 -- # local var val
00:03:18.311 19:33:59 -- setup/common.sh@20 -- # local mem_f mem
00:03:18.311 19:33:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:18.311 19:33:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:18.311 19:33:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:18.311 19:33:59 -- setup/common.sh@28 -- # mapfile -t mem
00:03:18.312 19:33:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': '
00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _
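(Every get_meminfo call in this trace shows "local node=" and a failed test for /sys/devices/system/node/node/meminfo, so the global /proc/meminfo is scanned. When a node number is passed, the same pattern reads the per-node meminfo and strips the "Node N " prefix -- that is what the mem=("${mem[@]#Node +([0-9]) }") line above does. A sketch of that branch, with an assumed node=0:)

# Per-node branch of the get_meminfo pattern (assumed shape): read the
# node-local meminfo and strip the "Node N " prefix so the same
# "Field: value" scan works unchanged.
shopt -s extglob                    # +([0-9]) below is an extglob pattern
node=0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
# "Node 0 HugePages_Total:   512" -> "HugePages_Total:   512"
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}" | grep HugePages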
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36862680 kB' 'MemAvailable: 42019696 kB' 'Buffers: 2696 kB' 'Cached: 18798628 kB' 'SwapCached: 0 kB' 'Active: 14716824 kB' 'Inactive: 4646328 kB' 'Active(anon): 14102832 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565104 kB' 'Mapped: 240344 kB' 'Shmem: 13541004 kB' 'KReclaimable: 541132 kB' 'Slab: 932720 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391588 kB' 'KernelStack: 12912 kB' 'PageTables: 9420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15285940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196760 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB' 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 
19:33:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 
19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.312 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.312 19:33:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.313 19:33:59 -- setup/common.sh@33 -- # echo 0 00:03:18.313 19:33:59 -- setup/common.sh@33 -- # return 0 00:03:18.313 19:33:59 -- setup/hugepages.sh@100 -- # resv=0 00:03:18.313 19:33:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:18.313 nr_hugepages=1024 00:03:18.313 19:33:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.313 resv_hugepages=0 00:03:18.313 19:33:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.313 surplus_hugepages=0 00:03:18.313 19:33:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.313 anon_hugepages=0 00:03:18.313 19:33:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.313 19:33:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:18.313 19:33:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.313 19:33:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.313 19:33:59 -- setup/common.sh@18 -- # local node= 00:03:18.313 19:33:59 -- setup/common.sh@19 -- # local var val 00:03:18.313 19:33:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.313 19:33:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.313 19:33:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.313 19:33:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.313 19:33:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.313 19:33:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36863116 kB' 'MemAvailable: 42020132 kB' 'Buffers: 2696 kB' 'Cached: 18798644 kB' 'SwapCached: 0 kB' 'Active: 14716852 kB' 'Inactive: 4646328 kB' 'Active(anon): 14102860 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 565108 kB' 'Mapped: 240344 kB' 'Shmem: 13541020 kB' 'KReclaimable: 541132 kB' 'Slab: 932720 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391588 kB' 'KernelStack: 12912 kB' 
'PageTables: 9420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15285956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196776 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB' 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.313 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.313 19:33:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
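One detail worth calling out in the scan above: xtrace prints the right-hand side of every [[ ... == ... ]] test with each character backslash-escaped (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l). That is simply how bash renders a quoted pattern, and the quoting is what forces a literal string comparison instead of a glob match. A minimal sketch of the idiom, plain bash and nothing SPDK-specific:

    key=HugePages_Total
    [[ $key == "HugePages_Total" ]] && echo literal   # quoted RHS: exact match; xtrace renders it escaped
    [[ $key == HugePages_* ]] && echo glob            # unquoted RHS: treated as a pattern

Both lines print for this key, but with key=HugePages_Free the literal test fails while the glob still matches, which is why the scripts quote the needle when they want exactly one /proc/meminfo field.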
00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.314 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.314 19:33:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.314 19:33:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.314 19:33:59 -- setup/common.sh@33 -- # echo 1024 00:03:18.314 19:33:59 -- setup/common.sh@33 -- # return 0 00:03:18.314 19:33:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.314 19:33:59 -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.315 19:33:59 -- setup/hugepages.sh@27 -- # local node 00:03:18.315 19:33:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.315 19:33:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.315 19:33:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.315 19:33:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.315 19:33:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.315 19:33:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.315 19:33:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.315 19:33:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.315 19:33:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.315 19:33:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.315 19:33:59 -- setup/common.sh@18 -- # local node=0 00:03:18.315 19:33:59 -- setup/common.sh@19 -- # local var val 00:03:18.315 19:33:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.315 19:33:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.315 19:33:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.315 19:33:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.315 19:33:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.315 19:33:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22912968 kB' 'MemUsed: 9916916 kB' 'SwapCached: 0 kB' 'Active: 7184812 kB' 'Inactive: 268120 kB' 'Active(anon): 6784536 kB' 'Inactive(anon): 0 kB' 'Active(file): 400276 kB' 'Inactive(file): 268120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7303768 kB' 'Mapped: 60928 kB' 'AnonPages: 152356 kB' 'Shmem: 6635372 kB' 'KernelStack: 7576 kB' 'PageTables: 4604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281476 kB' 'Slab: 517172 kB' 'SReclaimable: 281476 kB' 'SUnreclaim: 235696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
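The node-0 query that starts here is setup/common.sh's get_meminfo in miniature: if a node is named and /sys/devices/system/node/nodeN/meminfo exists, read that file instead of /proc/meminfo, strip the "Node N " prefix the per-node files put on every line, then walk key/value pairs with IFS=': ' read until the requested key turns up. A self-contained sketch of the same flow, assuming bash with extglob enabled; read_meminfo is an illustrative name, not the real helper:

    shopt -s extglob
    read_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # An empty $node yields .../node/node/meminfo, which never exists,
        # so the function falls back to the system-wide /proc/meminfo.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix (extglob pattern)
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    read_meminfo HugePages_Total 0   # against the node0 dump above this would print 512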
00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 
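Around these per-node queries hugepages.sh is only doing arithmetic: the expected count for each node starts from what the test requested and absorbs the reserved and surplus pages just read back, and the node0=512 expecting 512 lines further down are that comparison being echoed. A compressed sketch of the bookkeeping, hard-coding this run's two-node, 512-page layout (the real script fills these values from get_meminfo):

    nodes_test=(512 512)   # pages the test requested per node
    nodes_sys=(512 512)    # HugePages_Total read back from node0/node1 meminfo
    resv=0 surp=0          # HugePages_Rsvd and HugePages_Surp, both 0 in this run
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv + surp ))   # reserved/surplus pages raise the target
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done

With everything at its expected value the loop prints node0=512 expecting 512 and node1=512 expecting 512, matching the trace output below.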
00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.315 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.315 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.316 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.316 19:33:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.316 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.316 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.316 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.316 19:33:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.316 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.316 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.316 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.316 19:33:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.316 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.316 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.316 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.316 19:33:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.316 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.316 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.316 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.316 19:33:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.316 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.316 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.316 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.316 19:33:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.316 19:33:59 -- setup/common.sh@33 -- # echo 0 00:03:18.316 19:33:59 -- setup/common.sh@33 -- # return 0 00:03:18.577 19:33:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.577 19:33:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.577 19:33:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.577 19:33:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:18.577 19:33:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.577 19:33:59 -- setup/common.sh@18 -- # local node=1 00:03:18.577 19:33:59 -- setup/common.sh@19 -- # local var val 00:03:18.577 19:33:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.577 19:33:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.577 19:33:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:18.577 19:33:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:18.577 19:33:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.577 19:33:59 -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 13950856 kB' 'MemUsed: 13760968 kB' 'SwapCached: 0 kB' 'Active: 7531840 kB' 'Inactive: 4378208 kB' 'Active(anon): 7318124 kB' 'Inactive(anon): 0 kB' 'Active(file): 213716 kB' 'Inactive(file): 4378208 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11497588 kB' 'Mapped: 179416 kB' 'AnonPages: 412496 kB' 'Shmem: 6905664 kB' 'KernelStack: 5320 kB' 'PageTables: 4756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 259656 kB' 'Slab: 415548 kB' 'SReclaimable: 259656 kB' 'SUnreclaim: 155892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.577 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.577 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.578 19:33:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.578 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.578 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.578 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.578 19:33:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.578 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.578 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.578 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.578 19:33:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.578 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.578 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.578 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.578 19:33:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.578 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.578 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.578 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.578 19:33:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.578 19:33:59 -- setup/common.sh@32 -- # continue 00:03:18.578 19:33:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.578 19:33:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.578 19:33:59 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.578 19:33:59 -- setup/common.sh@32 -- # continue
00:03:18.578 19:33:59 -- setup/common.sh@31 -- # IFS=': '
00:03:18.578 19:33:59 -- setup/common.sh@31 -- # read -r var val _
00:03:18.578 19:33:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.578 19:33:59 -- setup/common.sh@32 -- # continue
00:03:18.578 19:33:59 -- setup/common.sh@31 -- # IFS=': '
00:03:18.578 19:33:59 -- setup/common.sh@31 -- # read -r var val _
00:03:18.578 19:33:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:18.578 19:33:59 -- setup/common.sh@33 -- # echo 0
00:03:18.578 19:33:59 -- setup/common.sh@33 -- # return 0
00:03:18.578 19:33:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:18.578 19:33:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:18.578 19:33:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:18.578 19:33:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:18.578 19:33:59 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:18.578 node0=512 expecting 512
00:03:18.578 19:33:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:18.578 19:33:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:18.578 19:33:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:18.578 19:33:59 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:18.578 node1=512 expecting 512
00:03:18.578 19:33:59 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:18.578
00:03:18.578 real 0m1.475s
00:03:18.578 user 0m0.646s
00:03:18.578 sys 0m0.789s
00:03:18.578 19:33:59 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:18.578 19:33:59 -- common/autotest_common.sh@10 -- # set +x
00:03:18.578 ************************************
00:03:18.578 END TEST even_2G_alloc
00:03:18.578 ************************************
00:03:18.578 19:33:59 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:18.578 19:33:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:18.578 19:33:59 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:18.578 19:33:59 -- common/autotest_common.sh@10 -- # set +x
00:03:18.578 ************************************
00:03:18.578 START TEST odd_alloc
00:03:18.578 ************************************
00:03:18.578 19:33:59 -- common/autotest_common.sh@1111 -- # odd_alloc
00:03:18.578 19:33:59 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:18.578 19:33:59 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:18.578 19:33:59 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:18.578 19:33:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:18.578 19:33:59 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:18.578 19:33:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:18.578 19:33:59 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:18.578 19:33:59 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:18.578 19:33:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:18.578 19:33:59 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:18.578 19:33:59 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:18.578 19:33:59 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:18.578 19:33:59 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:18.578 19:33:59 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:18.578 19:33:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:18.578 19:33:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:18.578 19:33:59 -- setup/hugepages.sh@83 -- # : 513
00:03:18.578 19:33:59 -- setup/hugepages.sh@84 -- # : 1
00:03:18.578 19:33:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:18.578 19:33:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:18.578 19:33:59 -- setup/hugepages.sh@83 -- # : 0
00:03:18.578 19:33:59 -- setup/hugepages.sh@84 -- # : 0
00:03:18.578 19:33:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:18.578 19:33:59 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:18.578 19:33:59 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:18.578 19:33:59 -- setup/hugepages.sh@160 -- # setup output
00:03:18.578 19:33:59 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:18.578 19:33:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:19.513 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:19.513 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:19.513 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:19.513 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:19.513 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:19.513 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:19.513 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:19.513 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:19.513 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:19.513 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:19.513 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:19.513 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:19.513 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:19.513 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:19.513 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:19.513 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:19.513 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:19.776 19:34:01 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:19.776 19:34:01 -- setup/hugepages.sh@89 -- # local node
00:03:19.776 19:34:01 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:19.776 19:34:01 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:19.776 19:34:01 -- setup/hugepages.sh@92 -- # local surp
00:03:19.776 19:34:01 -- setup/hugepages.sh@93 -- # local resv
00:03:19.776 19:34:01 -- setup/hugepages.sh@94 -- # local anon
00:03:19.776 19:34:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:19.776 19:34:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:19.776 19:34:01 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:19.776 19:34:01 -- setup/common.sh@18 -- # local node=
00:03:19.776 19:34:01 -- setup/common.sh@19 -- # local var val
00:03:19.776 19:34:01 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.776 19:34:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.776 19:34:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.776 19:34:01 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.776 19:34:01 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.776 19:34:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': '
00:03:19.776 19:34:01 --
setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36859192 kB' 'MemAvailable: 42016208 kB' 'Buffers: 2696 kB' 'Cached: 18798716 kB' 'SwapCached: 0 kB' 'Active: 14714356 kB' 'Inactive: 4646328 kB' 'Active(anon): 14100364 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562104 kB' 'Mapped: 239824 kB' 'Shmem: 13541092 kB' 'KReclaimable: 541132 kB' 'Slab: 932804 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391672 kB' 'KernelStack: 12896 kB' 'PageTables: 9272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 15277544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196680 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB' 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.776 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.776 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 
19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
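Before this verification pass, get_test_nr_hugepages_per_node already did the only interesting arithmetic in odd_alloc: 1025 pages cannot split evenly across 2 nodes, so the countdown loop gives node1 1025/2 = 512 and node0 the remaining 513 (the : 513 / : 1 lines in the trace are the remainder and node counter being updated through the : builtin). A standalone sketch mirroring that arithmetic:

    _nr_hugepages=1025
    _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # 512, then 513
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))          # remainder: 513, then 0
        : $(( _no_nodes -= 1 ))                                      # nodes left: 1, then 0
    done
    echo "${nodes_test[*]}"   # 513 512 -- node0 carries the odd page

Dividing by the number of nodes still unassigned, rather than a fixed node count, is what lets the last node absorb the remainder, so HUGEMEM=2049 rounds up to 1025 pages split 512 + 513.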
00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.777 19:34:01 -- setup/common.sh@33 -- # echo 0 00:03:19.777 19:34:01 -- setup/common.sh@33 -- # return 0 00:03:19.777 19:34:01 -- setup/hugepages.sh@97 -- # anon=0 00:03:19.777 19:34:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:19.777 19:34:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.777 19:34:01 -- setup/common.sh@18 -- # local node= 00:03:19.777 19:34:01 -- setup/common.sh@19 -- # local var val 00:03:19.777 19:34:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:19.777 19:34:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.777 19:34:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.777 19:34:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.777 19:34:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.777 19:34:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36857920 kB' 'MemAvailable: 42014936 kB' 'Buffers: 2696 kB' 'Cached: 18798720 kB' 'SwapCached: 0 kB' 'Active: 14715712 kB' 'Inactive: 4646328 kB' 'Active(anon): 14101720 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563956 kB' 'Mapped: 240172 kB' 'Shmem: 13541096 kB' 'KReclaimable: 
541132 kB' 'Slab: 932788 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391656 kB' 'KernelStack: 12848 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 15275312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196664 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB' 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.777 19:34:01 -- setup/common.sh@32 -- # continue 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.777 19:34:01 -- setup/common.sh@31 -- # read -r var 
00:03:19.778 19:34:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.778 19:34:01 -- setup/common.sh@33 -- # echo 0
00:03:19.778 19:34:01 -- setup/common.sh@33 -- # return 0
00:03:19.778 19:34:01 -- setup/hugepages.sh@99 -- # surp=0
00:03:19.778 19:34:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:19.778 19:34:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:19.778 19:34:01 -- setup/common.sh@18 -- # local node=
00:03:19.778 19:34:01 -- setup/common.sh@19 -- # local var val
00:03:19.778 19:34:01 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.778 19:34:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.778 19:34:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.778 19:34:01 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.779 19:34:01 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.779 19:34:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.779 19:34:01 -- setup/common.sh@31 -- # IFS=': '
00:03:19.779 19:34:01 -- setup/common.sh@31 -- # read -r var val _
00:03:19.779 19:34:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36852088 kB' 'MemAvailable: 42009104 kB' 'Buffers: 2696 kB' 'Cached: 18798732 kB' 'SwapCached: 0 kB' 'Active: 14718744 kB' 'Inactive: 4646328 kB' 'Active(anon): 14104752 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 566944 kB' 'Mapped: 239744 kB' 'Shmem: 13541108 kB' 'KReclaimable: 541132 kB' 'Slab: 932772 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391640 kB' 'KernelStack: 12880 kB' 'PageTables: 9136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 15278508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196684 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
00:03:19.779 19:34:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] (the scan walks every key from MemTotal to HugePages_Free; each fails the literal match and takes the 'continue' branch, with IFS=': ' / read -r var val _ between tests)
00:03:19.780 19:34:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:19.780 19:34:01 -- setup/common.sh@33 -- # echo 0
00:03:19.780 19:34:01 -- setup/common.sh@33 -- # return 0
00:03:19.780 19:34:01 -- setup/hugepages.sh@100 -- # resv=0
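The three lookups above (AnonHugePages, HugePages_Surp, HugePages_Rsvd) all go through the same get_meminfo helper in setup/common.sh. The trace entries @17 through @33 imply roughly the following shape; this is a sketch reconstructed from the trace alone, so the real helper may differ in details the log does not show (for example how the two node tests at @23/@25 are combined, and how the printf feeds the read loop):

    # Sketch of get_meminfo as implied by the setup/common.sh@17-@33 trace;
    # not the verbatim SPDK source.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With a node argument, prefer the per-node sysfs meminfo (@23-@24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it (@29).
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan key by key until $get matches, then emit the numeric value.
        # This is the [[ ... ]] / continue loop that dominates the trace.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Against the snapshots printed above, get_meminfo HugePages_Rsvd emits 0 and get_meminfo HugePages_Total emits 1025, which is exactly what the echo/return pairs in the trace show.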
00:03:19.780 19:34:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:19.780 nr_hugepages=1025
00:03:19.780 19:34:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:19.780 resv_hugepages=0
00:03:19.780 19:34:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:19.780 surplus_hugepages=0
00:03:19.780 19:34:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:19.780 anon_hugepages=0
00:03:19.780 19:34:01 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:19.780 19:34:01 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:19.780 19:34:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:19.780 19:34:01 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:19.780 19:34:01 -- setup/common.sh@18 -- # local node=
00:03:19.780 19:34:01 -- setup/common.sh@19 -- # local var val
00:03:19.780 19:34:01 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.780 19:34:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.780 19:34:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.780 19:34:01 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.780 19:34:01 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.780 19:34:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.780 19:34:01 -- setup/common.sh@31 -- # IFS=': '
00:03:19.780 19:34:01 -- setup/common.sh@31 -- # read -r var val _
00:03:19.780 19:34:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36852088 kB' 'MemAvailable: 42009104 kB' 'Buffers: 2696 kB' 'Cached: 18798744 kB' 'SwapCached: 0 kB' 'Active: 14713216 kB' 'Inactive: 4646328 kB' 'Active(anon): 14099224 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561376 kB' 'Mapped: 239724 kB' 'Shmem: 13541120 kB' 'KReclaimable: 541132 kB' 'Slab: 932772 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391640 kB' 'KernelStack: 12864 kB' 'PageTables: 9092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 15272400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196696 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
00:03:19.781 19:34:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] (the scan walks every key from MemTotal to Unaccepted; each fails the literal match and takes the 'continue' branch, with IFS=': ' / read -r var val _ between tests)
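What the checks at setup/hugepages.sh@102 through @110 establish: this run requested an odd-sized pool of 1025 huge pages, and because the anonymous, surplus and reserved counts are all 0, the kernel must report exactly 1025 in HugePages_Total. A hedged sketch of that verification step, with variable names taken from the trace and the surrounding control flow assumed rather than read from the source:

    # Sketch of the verification traced at setup/hugepages.sh@97-@110. The
    # get_meminfo calls and echoes are real trace entries; everything else
    # is assumed. nr_hugepages is set by the test run (1025 here).
    nr_hugepages=${nr_hugepages:-1025}
    anon=$(get_meminfo AnonHugePages)    # 0 in this run (@97)
    surp=$(get_meminfo HugePages_Surp)   # 0 (@99)
    resv=$(get_meminfo HugePages_Rsvd)   # 0 (@100)

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # 1025 == 1025 + 0 + 0: the pool matches the requested size exactly once
    # surplus and reserved pages are accounted for (the @110 check).
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))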
00:03:19.781 19:34:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:19.781 19:34:01 -- setup/common.sh@33 -- # echo 1025
00:03:19.781 19:34:01 -- setup/common.sh@33 -- # return 0
00:03:19.782 19:34:01 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:19.782 19:34:01 -- setup/hugepages.sh@112 -- # get_nodes
00:03:19.782 19:34:01 -- setup/hugepages.sh@27 -- # local node
00:03:19.782 19:34:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.782 19:34:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:19.782 19:34:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.782 19:34:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:19.782 19:34:01 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:19.782 19:34:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:19.782 19:34:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.782 19:34:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:20.043 19:34:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:20.043 19:34:01 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.043 19:34:01 -- setup/common.sh@18 -- # local node=0
00:03:20.043 19:34:01 -- setup/common.sh@19 -- # local var val
00:03:20.043 19:34:01 -- setup/common.sh@20 -- # local mem_f mem
00:03:20.043 19:34:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.043 19:34:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:20.043 19:34:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:20.043 19:34:01 -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.043 19:34:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.043 19:34:01 -- setup/common.sh@31 -- # IFS=': '
00:03:20.043 19:34:01 -- setup/common.sh@31 -- # read -r var val _
00:03:20.043 19:34:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22917808 kB' 'MemUsed: 9912076 kB' 'SwapCached: 0 kB' 'Active: 7181900 kB' 'Inactive: 268120 kB' 'Active(anon): 6781624 kB' 'Inactive(anon): 0 kB' 'Active(file): 400276 kB' 'Inactive(file): 268120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7303808 kB' 'Mapped: 59968 kB' 'AnonPages: 149344 kB' 'Shmem: 6635412 kB' 'KernelStack: 7544 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281476 kB' 'Slab: 517092 kB' 'SReclaimable: 281476 kB' 'SUnreclaim: 235616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:20.043 19:34:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] (the scan walks every node0 key from MemTotal to HugePages_Free; each fails the literal match and takes the 'continue' branch, with IFS=': ' / read -r var val _ between tests)
19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.044 19:34:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.044 19:34:01 -- setup/common.sh@32 -- # continue 00:03:20.044 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.044 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.044 19:34:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.044 19:34:01 -- setup/common.sh@32 -- # continue 00:03:20.044 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.044 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.044 19:34:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.044 19:34:01 -- setup/common.sh@32 -- # continue 00:03:20.044 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.044 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.044 19:34:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.044 19:34:01 -- setup/common.sh@33 -- # echo 0 00:03:20.044 19:34:01 -- setup/common.sh@33 -- # return 0 00:03:20.044 19:34:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.044 19:34:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.044 19:34:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.044 19:34:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:20.044 19:34:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.044 19:34:01 -- setup/common.sh@18 -- # local node=1 00:03:20.044 19:34:01 -- setup/common.sh@19 -- # local var val 00:03:20.044 19:34:01 -- setup/common.sh@20 -- # local mem_f mem 00:03:20.044 19:34:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.044 19:34:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:20.044 19:34:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:20.044 19:34:01 -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.044 19:34:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.044 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.044 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.044 19:34:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 13933776 kB' 'MemUsed: 13778048 kB' 'SwapCached: 0 kB' 'Active: 7532260 kB' 'Inactive: 4378208 kB' 'Active(anon): 7318544 kB' 'Inactive(anon): 0 kB' 'Active(file): 213716 kB' 'Inactive(file): 4378208 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11497648 kB' 'Mapped: 179340 kB' 'AnonPages: 412988 kB' 'Shmem: 6905724 kB' 'KernelStack: 5336 kB' 'PageTables: 4856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 259656 kB' 'Slab: 415680 kB' 'SReclaimable: 259656 kB' 'SUnreclaim: 156024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:20.044 19:34:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.044 19:34:01 -- setup/common.sh@32 -- # continue 00:03:20.044 19:34:01 -- setup/common.sh@31 -- # IFS=': ' 00:03:20.044 19:34:01 -- setup/common.sh@31 -- # read -r var val _ 00:03:20.044 19:34:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.044 19:34:01 -- setup/common.sh@32 -- # continue 
[... setup/common.sh@31-32 loop: the node1 fields MemTotal through HugePages_Free are read back and compared against HugePages_Surp, each skipped with continue ...]
00:03:20.045 19:34:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.045 19:34:01 -- setup/common.sh@33 -- # echo 0
00:03:20.045 19:34:01 -- setup/common.sh@33 -- # return 0
00:03:20.045 19:34:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.045 19:34:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.045 19:34:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.045 19:34:01 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:20.045 node0=512 expecting 513
00:03:20.045 19:34:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.045 19:34:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.045 19:34:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.045 19:34:01 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:20.045 node1=513 expecting 512
00:03:20.045 19:34:01 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:20.045
00:03:20.045 real 0m1.360s
00:03:20.045 user 0m0.568s
00:03:20.045 sys 0m0.747s
00:03:20.045 19:34:01 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:20.045 19:34:01 -- common/autotest_common.sh@10 -- # set +x
00:03:20.045 ************************************
00:03:20.045 END TEST odd_alloc
00:03:20.045 ************************************
00:03:20.045 19:34:01 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:20.045 19:34:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:20.045 19:34:01 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:20.045 19:34:01 -- common/autotest_common.sh@10 -- # set +x
00:03:20.045 ************************************
00:03:20.045 START TEST custom_alloc
00:03:20.045 ************************************
00:03:20.045 19:34:01 -- common/autotest_common.sh@1111 -- # custom_alloc
00:03:20.045 19:34:01 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:20.045 19:34:01 -- setup/hugepages.sh@169 -- # local node
00:03:20.045 19:34:01 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:20.045 19:34:01 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:20.045 19:34:01 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:20.045 19:34:01 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:20.045 19:34:01 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:20.045 19:34:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:20.045 19:34:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:20.045 19:34:01 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:20.045 19:34:01 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.045 19:34:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:20.045 19:34:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.045 19:34:01 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.045 19:34:01 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.045 19:34:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:20.045 19:34:01 -- setup/hugepages.sh@83 -- # : 256
00:03:20.045 19:34:01 -- setup/hugepages.sh@84 -- # : 1
00:03:20.045 19:34:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:20.045 19:34:01 -- setup/hugepages.sh@83 -- # : 0
00:03:20.045 19:34:01 -- setup/hugepages.sh@84 -- # : 0
00:03:20.045 19:34:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:20.045 19:34:01 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:20.045 19:34:01 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:20.045 19:34:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
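The two get_test_nr_hugepages calls above (1048576 and 2097152) just turn a pool size in kB into a page count, which is then spread across the NUMA nodes. A hedged sketch of the arithmetic, assuming the 2048 kB default hugepage size that the meminfo dumps later in this log report ('Hugepagesize: 2048 kB'):

  default_hugepages=2048                     # kB, per Hugepagesize
  get_test_nr_hugepages() {
      local size=$1                          # requested pool size in kB
      (( size >= default_hugepages ))        # same sanity check as @55
      nr_hugepages=$(( size / default_hugepages ))
  }
  get_test_nr_hugepages 1048576              # 1 GiB  -> nr_hugepages=512
  get_test_nr_hugepages 2097152              # 2 GiB  -> nr_hugepages=1024

The 512-page pool becomes nodes_hp[0] and the 1024-page pool nodes_hp[1], giving the 1536-page total that the test verifies further down.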
00:03:20.045 19:34:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:20.045 19:34:01 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:20.045 19:34:01 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.045 19:34:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:20.045 19:34:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.045 19:34:01 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.045 19:34:01 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.045 19:34:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:20.045 19:34:01 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:20.045 19:34:01 -- setup/hugepages.sh@78 -- # return 0
00:03:20.045 19:34:01 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:20.045 19:34:01 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:20.045 19:34:01 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:20.045 19:34:01 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:20.045 19:34:01 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:20.045 19:34:01 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:20.045 19:34:01 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:20.045 19:34:01 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.045 19:34:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:20.045 19:34:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.045 19:34:01 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.045 19:34:01 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.045 19:34:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:20.045 19:34:01 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:20.045 19:34:01 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:20.045 19:34:01 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:20.045 19:34:01 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:20.045 19:34:01 -- setup/hugepages.sh@78 -- # return 0
00:03:20.045 19:34:01 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:20.045 19:34:01 -- setup/hugepages.sh@187 -- # setup output
00:03:20.045 19:34:01 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.045 19:34:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:21.455 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:21.456 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:21.456 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:21.456 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:21.456 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:21.456 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:21.456 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:21.456 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:21.456 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:21.456 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:21.456 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:21.456 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:21.456 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:21.456 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:21.456 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:21.456 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:21.456 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:21.456 19:34:02 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:21.456 19:34:02 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:21.456 19:34:02 -- setup/hugepages.sh@89 -- # local node
00:03:21.456 19:34:02 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:21.456 19:34:02 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:21.456 19:34:02 -- setup/hugepages.sh@92 -- # local surp
00:03:21.456 19:34:02 -- setup/hugepages.sh@93 -- # local resv
00:03:21.456 19:34:02 -- setup/hugepages.sh@94 -- # local anon
00:03:21.456 19:34:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:21.456 19:34:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:21.456 19:34:02 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:21.456 19:34:02 -- setup/common.sh@18 -- # local node=
00:03:21.456 19:34:02 -- setup/common.sh@19 -- # local var val
00:03:21.456 19:34:02 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.456 19:34:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.456 19:34:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.456 19:34:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.456 19:34:02 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.456 19:34:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.456 19:34:02 -- setup/common.sh@31 -- # IFS=': '
00:03:21.456 19:34:02 -- setup/common.sh@31 -- # read -r var val _
00:03:21.456 19:34:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 35804832 kB' 'MemAvailable: 40961848 kB' 'Buffers: 2696 kB' 'Cached: 18798816 kB' 'SwapCached: 0 kB' 'Active: 14713864 kB' 'Inactive: 4646328 kB' 'Active(anon): 14099872 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561924 kB' 'Mapped: 239316 kB' 'Shmem: 13541192 kB' 'KReclaimable: 541132 kB' 'Slab: 932700 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391568 kB' 'KernelStack: 12880 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 15272276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196776 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
[... setup/common.sh@31-32 loop: fields MemTotal through HardwareCorrupted are read back and compared against AnonHugePages, each skipped with continue ...]
00:03:21.457 19:34:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:21.457 19:34:02 -- setup/common.sh@33 -- # echo 0
00:03:21.457 19:34:02 -- setup/common.sh@33 -- # return 0
00:03:21.457 19:34:02 -- setup/hugepages.sh@97 -- # anon=0
00:03:21.457 19:34:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:21.457 19:34:02 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.457 19:34:02 -- setup/common.sh@18 -- # local node=
00:03:21.457 19:34:02 -- setup/common.sh@19 -- # local var val
00:03:21.457 19:34:02 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.457 19:34:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.457 19:34:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.457 19:34:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.457 19:34:02 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.457 19:34:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.457 19:34:02 -- setup/common.sh@31 -- # IFS=': '
00:03:21.457 19:34:02 -- setup/common.sh@31 -- # read -r var val _
00:03:21.457 19:34:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 35804204 kB' 'MemAvailable: 40961220 kB' 'Buffers: 2696 kB' 'Cached: 18798816 kB' 'SwapCached: 0 kB' 'Active: 14714548 kB' 'Inactive: 4646328 kB' 'Active(anon): 14100556 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562640 kB' 'Mapped: 239316 kB' 'Shmem: 13541192 kB' 'KReclaimable: 541132 kB' 'Slab: 932692 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391560 kB' 'KernelStack: 12896 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 15272288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196744 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
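A note on the \H\u\g\e\P\a\g\e\s... tokens that fill this trace: in setup/common.sh the comparison is presumably written as [[ $var == "$get" ]], and the quoted right-hand side makes it a literal string compare rather than a glob match. bash's xtrace re-prints the expanded word in backslash-escaped, reusable form, which is all the backslashes are. A two-line reproduction:

  $ set -x
  $ get=HugePages_Surp
  $ [[ HugePages_Total == "$get" ]]
  + [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]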
[... setup/common.sh@31-32 loop: fields MemTotal through HugePages_Rsvd are read back and compared against HugePages_Surp, each skipped with continue ...]
00:03:21.458 19:34:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.458 19:34:02 -- setup/common.sh@33 -- # echo 0
00:03:21.458 19:34:02 -- setup/common.sh@33 -- # return 0
00:03:21.458 19:34:02 -- setup/hugepages.sh@99 -- # surp=0
00:03:21.458 19:34:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:21.458 19:34:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:21.458 19:34:02 -- setup/common.sh@18 -- # local node=
00:03:21.458 19:34:02 -- setup/common.sh@19 -- # local var val
00:03:21.458 19:34:02 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.458 19:34:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
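At this point verify_nr_hugepages has collected anon=0 and surp=0 and is about to fetch HugePages_Rsvd; the bookkeeping it is building toward (the @107/@109 checks further down) amounts to the following, with the values taken from this trace:

  nr_hugepages=1536   # requested via HUGENODE, read back as HugePages_Total
  surp=0              # HugePages_Surp
  resv=0              # HugePages_Rsvd
  anon=0              # AnonHugePages -- THP must not be inflating the numbers
  (( 1536 == nr_hugepages + surp + resv ))   # setup/hugepages.sh@107
  (( 1536 == nr_hugepages ))                 # setup/hugepages.sh@109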
19:34:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.458 19:34:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.458 19:34:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.458 19:34:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.458 19:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.458 19:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.458 19:34:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 35805028 kB' 'MemAvailable: 40962044 kB' 'Buffers: 2696 kB' 'Cached: 18798828 kB' 'SwapCached: 0 kB' 'Active: 14714196 kB' 'Inactive: 4646328 kB' 'Active(anon): 14100204 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562240 kB' 'Mapped: 239316 kB' 'Shmem: 13541204 kB' 'KReclaimable: 541132 kB' 'Slab: 932684 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391552 kB' 'KernelStack: 12880 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 15272304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196744 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB' 00:03:21.459 19:34:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.459 19:34:02 -- setup/common.sh@32 -- # continue 00:03:21.459 19:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.459 19:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.459 19:34:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.459 19:34:02 -- setup/common.sh@32 -- # continue 00:03:21.459 19:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.459 19:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.459 19:34:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.459 19:34:02 -- setup/common.sh@32 -- # continue 00:03:21.459 19:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.459 19:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.459 19:34:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.459 19:34:02 -- setup/common.sh@32 -- # continue 00:03:21.459 19:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.459 19:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.459 19:34:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.459 19:34:02 -- setup/common.sh@32 -- # continue 00:03:21.459 19:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.459 19:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.459 19:34:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.459 19:34:02 -- setup/common.sh@32 -- # continue 00:03:21.459 19:34:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.459 19:34:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.459 19:34:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.459 19:34:02 -- 
setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue for every /proc/meminfo key from Inactive through FilePmdMapped; none matches HugePages_Rsvd]
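A note on the escaping that dominates this trace: inside [[ ... == ... ]] an unquoted right-hand side is treated as a glob pattern, so setup/common.sh quotes the key it is looking for, and bash's xtrace prints that quoted operand with every character backslash-escaped (\H\u\g\e\P\a\g\e\s\_\R\s\v\d) to show the match is literal. A minimal standalone illustration of the same behavior, not taken from the SPDK source:

    key=HugePages_Rsvd
    # Quoting the RHS disables glob matching; under `bash -x` the pattern
    # is printed character-escaped, exactly as in the trace above.
    if [[ $key == "HugePages_Rsvd" ]]; then
        echo "matched literally"
    fi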
[xtrace condensed: the scan skips CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free before reaching the requested key]
00:03:21.460 19:34:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:21.460 19:34:02 -- setup/common.sh@33 -- # echo 0
00:03:21.460 19:34:02 -- setup/common.sh@33 -- # return 0
00:03:21.460 19:34:02 -- setup/hugepages.sh@100 -- # resv=0
00:03:21.460 19:34:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:03:21.460 19:34:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:21.460 19:34:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:21.460 19:34:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:21.460 19:34:02 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:21.460 19:34:02 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:21.460 19:34:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:21.460 19:34:02 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:21.460 19:34:02 -- setup/common.sh@18 -- # local node=
00:03:21.460 19:34:02 -- setup/common.sh@19 -- # local var val
00:03:21.460 19:34:02 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.460 19:34:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.460 19:34:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.460 19:34:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.460 19:34:02 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.460 19:34:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.460 19:34:02 -- setup/common.sh@31 -- # IFS=': '
00:03:21.460 19:34:02 -- setup/common.sh@31 -- # read -r var val _
00:03:21.460 19:34:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 35805068 kB' 'MemAvailable: 40962084 kB' 'Buffers: 2696 kB' 'Cached: 18798844 kB' 'SwapCached: 0 kB' 'Active: 14714288 kB' 'Inactive: 4646328 kB' 'Active(anon): 14100296 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562344 kB' 'Mapped: 239316 kB' 'Shmem: 13541220 kB' 'KReclaimable: 541132 kB' 'Slab: 932684 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391552 kB' 'KernelStack: 12896 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 15272316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196760 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
[xtrace condensed: the setup/common.sh@31-32 read/continue scan walks the dump above key by key until it reaches HugePages_Total]
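For readers following the trace: get_meminfo in setup/common.sh snapshots a meminfo file into an array and then scans it key by key, which is what produces the long read/continue runs condensed above. A minimal standalone sketch of that pattern, reconstructed from the xtrace for illustration only (the real helper uses mapfile plus an extglob prefix strip, as the trace shows, and differs in detail):

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node statistics live in sysfs when a node id is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # sysfs lines carry a "Node N " prefix; drop it so both files
        # parse identically, then print the value of the requested key.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Total    # 1536 on this host, per the dump above
    get_meminfo HugePages_Surp 0   # per-node query against node0's meminfo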
00:03:21.461 19:34:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:21.461 19:34:02 -- setup/common.sh@33 -- # echo 1536
00:03:21.461 19:34:02 -- setup/common.sh@33 -- # return 0
00:03:21.461 19:34:02 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:21.461 19:34:02 -- setup/hugepages.sh@112 -- # get_nodes
00:03:21.461 19:34:02 -- setup/hugepages.sh@27 -- # local node
00:03:21.461 19:34:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.461 19:34:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:21.461 19:34:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.461 19:34:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:21.461 19:34:02 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:21.461 19:34:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:21.461 19:34:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.461 19:34:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:21.461 19:34:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:21.461 19:34:02 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.461 19:34:02 -- setup/common.sh@18 -- # local node=0
00:03:21.461 19:34:02 -- setup/common.sh@19 -- # local var val
00:03:21.461 19:34:02 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.461 19:34:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.461 19:34:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:21.461 19:34:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:21.461 19:34:02 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.462 19:34:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.462 19:34:02 -- setup/common.sh@31 -- # IFS=': '
00:03:21.462 19:34:02 -- setup/common.sh@31 -- # read -r var val _
00:03:21.462 19:34:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22918820 kB' 'MemUsed: 9911064 kB' 'SwapCached: 0 kB' 'Active: 7182064 kB' 'Inactive: 268120 kB' 'Active(anon): 6781788 kB' 'Inactive(anon): 0 kB' 'Active(file): 400276 kB' 'Inactive(file): 268120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7303864 kB' 'Mapped: 59968 kB' 'AnonPages: 149452 kB' 'Shmem: 6635468 kB' 'KernelStack: 7576 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281476 kB' 'Slab: 517068 kB' 'SReclaimable: 281476 kB' 'SUnreclaim: 235592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the read/continue scan walks node0's meminfo above until HugePages_Surp]
00:03:21.462 19:34:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.462 19:34:02 -- setup/common.sh@33 -- # echo 0
00:03:21.462 19:34:02 -- setup/common.sh@33 -- # return 0
00:03:21.462 19:34:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:21.462 19:34:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.462 19:34:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:21.463 19:34:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
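The node bookkeeping traced above (get_nodes setting nodes_sys[0]=512 and nodes_sys[1]=1024, no_nodes=2) amounts to enumerating /sys/devices/system/node/node* and reading each node's hugepage count. A self-contained sketch of that accounting; this is illustrative, the harness's own loop is the extglob for-loop shown in the trace:

    # Walk every NUMA node directory and report its hugepage total,
    # mirroring the node0=512 / node1=1024 split verified below.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
        echo "node${node} HugePages_Total=${total}"
    done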
00:03:21.463 19:34:02 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.463 19:34:02 -- setup/common.sh@18 -- # local node=1
00:03:21.463 19:34:02 -- setup/common.sh@19 -- # local var val
00:03:21.463 19:34:02 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.463 19:34:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.463 19:34:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:21.463 19:34:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:21.463 19:34:02 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.463 19:34:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.463 19:34:02 -- setup/common.sh@31 -- # IFS=': '
00:03:21.463 19:34:02 -- setup/common.sh@31 -- # read -r var val _
00:03:21.463 19:34:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 12886248 kB' 'MemUsed: 14825576 kB' 'SwapCached: 0 kB' 'Active: 7532212 kB' 'Inactive: 4378208 kB' 'Active(anon): 7318496 kB' 'Inactive(anon): 0 kB' 'Active(file): 213716 kB' 'Inactive(file): 4378208 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11497692 kB' 'Mapped: 179348 kB' 'AnonPages: 412856 kB' 'Shmem: 6905768 kB' 'KernelStack: 5304 kB' 'PageTables: 4848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 259656 kB' 'Slab: 415616 kB' 'SReclaimable: 259656 kB' 'SUnreclaim: 155960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the read/continue scan walks node1's meminfo above until HugePages_Surp]
00:03:21.464 19:34:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.464 19:34:02 -- setup/common.sh@33 -- # echo 0
00:03:21.464 19:34:02 -- setup/common.sh@33 -- # return 0
00:03:21.464 19:34:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:21.464 19:34:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:21.464 19:34:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:21.464 19:34:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:21.464 19:34:02 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:21.464 19:34:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:21.464 19:34:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:21.464 19:34:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:21.464 19:34:02 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:03:21.464 19:34:02 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:21.464
00:03:21.464 real 0m1.414s
00:03:21.464 user 0m0.631s
00:03:21.464 sys 0m0.733s
00:03:21.464 19:34:02 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:21.464 19:34:02 -- common/autotest_common.sh@10 -- # set +x
00:03:21.464 ************************************
00:03:21.464 END TEST custom_alloc
00:03:21.464 ************************************
00:03:21.464 19:34:02 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:21.464 19:34:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:21.464 19:34:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:21.464 19:34:02 -- common/autotest_common.sh@10 -- # set +x
00:03:21.722 ************************************
00:03:21.722 START TEST no_shrink_alloc
00:03:21.722 ************************************
00:03:21.722 19:34:02 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:03:21.722 19:34:02 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:21.722 19:34:02 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:21.722 19:34:02 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:21.722 19:34:02 -- setup/hugepages.sh@51 -- # shift
00:03:21.722 19:34:02 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:21.722 19:34:02 -- setup/hugepages.sh@52 -- # local node_ids
00:03:21.722 19:34:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
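The nr_hugepages=1024 that no_shrink_alloc sets next follows directly from the request traced above: get_test_nr_hugepages is passed a size in kB (2097152) and divides it by the default hugepage size, which the meminfo dumps earlier show as Hugepagesize: 2048 kB. A quick standalone check of that arithmetic, with illustrative variable names:

    # 2 GiB requested, expressed in kB, split into 2048 kB hugepages
    size_kb=2097152
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    echo "nr_hugepages=$((size_kb / hugepagesize_kb))"  # 2097152 / 2048 = 1024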
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:21.722 19:34:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:21.722 19:34:02 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:21.722 19:34:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:21.722 19:34:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:21.722 19:34:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:21.722 19:34:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:21.722 19:34:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:21.722 19:34:02 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:21.722 19:34:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:21.722 19:34:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:21.722 19:34:02 -- setup/hugepages.sh@73 -- # return 0 00:03:21.722 19:34:02 -- setup/hugepages.sh@198 -- # setup output 00:03:21.722 19:34:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.722 19:34:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.662 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:22.662 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:22.662 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:22.662 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:22.662 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:22.662 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:22.662 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:22.662 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:22.662 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:22.662 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:22.662 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:22.662 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:22.662 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:22.662 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:22.662 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:22.662 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:22.662 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:22.662 19:34:04 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:22.662 19:34:04 -- setup/hugepages.sh@89 -- # local node 00:03:22.662 19:34:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.662 19:34:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.662 19:34:04 -- setup/hugepages.sh@92 -- # local surp 00:03:22.662 19:34:04 -- setup/hugepages.sh@93 -- # local resv 00:03:22.662 19:34:04 -- setup/hugepages.sh@94 -- # local anon 00:03:22.662 19:34:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.662 19:34:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.662 19:34:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.662 19:34:04 -- setup/common.sh@18 -- # local node= 00:03:22.662 19:34:04 -- setup/common.sh@19 -- # local var val 00:03:22.662 19:34:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.662 19:34:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.662 19:34:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.662 19:34:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.662 19:34:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.662 19:34:04 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.662 19:34:04 -- setup/common.sh@31 -- # IFS=': '
00:03:22.662 19:34:04 -- setup/common.sh@31 -- # read -r var val _
00:03:22.662 19:34:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36840496 kB' 'MemAvailable: 41997512 kB' 'Buffers: 2696 kB' 'Cached: 18798908 kB' 'SwapCached: 0 kB' 'Active: 14714832 kB' 'Inactive: 4646328 kB' 'Active(anon): 14100840 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562836 kB' 'Mapped: 239432 kB' 'Shmem: 13541284 kB' 'KReclaimable: 541132 kB' 'Slab: 932592 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391460 kB' 'KernelStack: 12880 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15272980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196728 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
[xtrace condensed: the setup/common.sh@31-32 read/continue scan walks the dump above from MemTotal through WritebackTmp looking for AnonHugePages]
setup/common.sh@31 -- # read -r var val _ 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.924 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.924 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.924 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.924 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.924 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.924 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.924 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.924 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.924 19:34:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.924 19:34:04 -- setup/common.sh@33 -- # echo 0 00:03:22.924 19:34:04 -- setup/common.sh@33 -- # return 0 00:03:22.924 19:34:04 -- setup/hugepages.sh@97 -- # anon=0 00:03:22.924 19:34:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.924 19:34:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.924 19:34:04 -- setup/common.sh@18 -- # local node= 00:03:22.924 19:34:04 -- setup/common.sh@19 -- # local var val 00:03:22.924 19:34:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.924 19:34:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.924 19:34:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.924 19:34:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.924 19:34:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.925 19:34:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.925 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.925 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.925 19:34:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36840496 kB' 'MemAvailable: 41997512 kB' 'Buffers: 2696 kB' 'Cached: 18798908 kB' 'SwapCached: 0 kB' 'Active: 14715060 kB' 'Inactive: 4646328 kB' 'Active(anon): 14101068 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
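The condensed trace above is the get_meminfo helper from test/setup/common.sh: it slurps a meminfo file into an array with mapfile, strips any "Node <n> " prefix, then splits each "key: value" line and continues past every key until the requested one (AnonHugePages here) matches, echoing its value. A minimal standalone sketch of that pattern, assuming bash 4+ (function name and argument handling are illustrative, not SPDK's exact code):

  #!/usr/bin/env bash
  shopt -s extglob                         # the +([0-9]) pattern below needs extglob
  get_meminfo_sketch() {
      local get=$1 mem_f=${2:-/proc/meminfo} line var val _
      local -a mem
      mapfile -t mem < "$mem_f"            # one array element per meminfo line
      mem=("${mem[@]#Node +([0-9]) }")     # node-local files prefix lines with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue # skip non-matching keys, as in the trace
          echo "$val"
          return 0
      done
      return 1
  }
  get_meminfo_sketch AnonHugePages         # prints 0, matching the snapshot above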
00:03:22.924 19:34:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:22.924 19:34:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.924 19:34:04 -- setup/common.sh@18 -- # local node=
00:03:22.924 19:34:04 -- setup/common.sh@19 -- # local var val
00:03:22.924 19:34:04 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.924 19:34:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.924 19:34:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.924 19:34:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.924 19:34:04 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.925 19:34:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.925 19:34:04 -- setup/common.sh@31 -- # IFS=': '
00:03:22.925 19:34:04 -- setup/common.sh@31 -- # read -r var val _
00:03:22.925 19:34:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36840496 kB' 'MemAvailable: 41997512 kB' 'Buffers: 2696 kB' 'Cached: 18798908 kB' 'SwapCached: 0 kB' 'Active: 14715060 kB' 'Inactive: 4646328 kB' 'Active(anon): 14101068 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563032 kB' 'Mapped: 239428 kB' 'Shmem: 13541284 kB' 'KReclaimable: 541132 kB' 'Slab: 932580 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391448 kB' 'KernelStack: 12912 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15272992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196664 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
[trace condensed: setup/common.sh@32 compares each /proc/meminfo key against HugePages_Surp, continuing past every non-matching key]
00:03:22.926 19:34:04 -- setup/common.sh@33 -- # echo 0
00:03:22.926 19:34:04 -- setup/common.sh@33 -- # return 0
00:03:22.926 19:34:04 -- setup/hugepages.sh@99 -- # surp=0
00:03:22.926 19:34:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:22.926 19:34:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:22.926 19:34:04 -- setup/common.sh@18 -- # local node=
00:03:22.926 19:34:04 -- setup/common.sh@19 -- # local var val
00:03:22.926 19:34:04 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.926 19:34:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.926 19:34:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.926 19:34:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.926 19:34:04 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.926 19:34:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.926 19:34:04 -- setup/common.sh@31 -- # IFS=': '
00:03:22.926 19:34:04 -- setup/common.sh@31 -- # read -r var val _
00:03:22.926 19:34:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36840496 kB' 'MemAvailable: 41997512 kB' 'Buffers: 2696 kB' 'Cached: 18798912 kB' 'SwapCached: 0 kB' 'Active: 14715224 kB' 'Inactive: 4646328 kB' 'Active(anon): 14101232 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563200 kB' 'Mapped: 239352 kB' 'Shmem: 13541288 kB' 'KReclaimable: 541132 kB' 'Slab: 932588 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391456 kB' 'KernelStack: 12912 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15273008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196664 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
[trace condensed: setup/common.sh@32 compares each /proc/meminfo key against HugePages_Rsvd, continuing past every non-matching key]
00:03:22.927 19:34:04 -- setup/common.sh@33 -- # echo 0
00:03:22.927 19:34:04 -- setup/common.sh@33 -- # return 0
00:03:22.927 19:34:04 -- setup/hugepages.sh@100 -- # resv=0
00:03:22.927 19:34:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:22.927 nr_hugepages=1024
00:03:22.927 19:34:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:22.927 resv_hugepages=0
00:03:22.928 19:34:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:22.928 surplus_hugepages=0
00:03:22.928 19:34:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:22.928 anon_hugepages=0
00:03:22.928 19:34:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.928 19:34:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
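With anon, surp and resv collected, setup/hugepages.sh@102-109 prints the summary above and asserts that the kernel still reports exactly the requested page count, with nothing leaked into surplus or reserved pages. A sketch of that accounting, with the literal 1024 target taken from the already-expanded arithmetic in the trace (the wrapper function itself is hypothetical):

  # verify_hugepages_sketch: the two checks traced at setup/hugepages.sh@107/@109
  verify_hugepages_sketch() {
      local nr_hugepages=1024              # target page count under test
      local anon=0 surp=0 resv=0           # AnonHugePages / HugePages_Surp / HugePages_Rsvd
      echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
      (( 1024 == nr_hugepages + surp + resv )) || return 1   # nothing surplus or reserved
      (( 1024 == nr_hugepages ))                             # the full allocation survived
  }
  verify_hugepages_sketch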
00:03:22.928 19:34:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:22.928 19:34:04 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:22.928 19:34:04 -- setup/common.sh@18 -- # local node=
00:03:22.928 19:34:04 -- setup/common.sh@19 -- # local var val
00:03:22.928 19:34:04 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.928 19:34:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.928 19:34:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.928 19:34:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.928 19:34:04 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.928 19:34:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.928 19:34:04 -- setup/common.sh@31 -- # IFS=': '
00:03:22.928 19:34:04 -- setup/common.sh@31 -- # read -r var val _
00:03:22.928 19:34:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36840496 kB' 'MemAvailable: 41997512 kB' 'Buffers: 2696 kB' 'Cached: 18798924 kB' 'SwapCached: 0 kB' 'Active: 14714308 kB' 'Inactive: 4646328 kB' 'Active(anon): 14100316 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562264 kB' 'Mapped: 239352 kB' 'Shmem: 13541300 kB' 'KReclaimable: 541132 kB' 'Slab: 932588 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391456 kB' 'KernelStack: 12896 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15273020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196664 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB'
[trace condensed: setup/common.sh@32 compares each /proc/meminfo key against HugePages_Total, continuing past every non-matching key]
00:03:22.929 19:34:04 -- setup/common.sh@33 -- # echo 1024
00:03:22.929 19:34:04 -- setup/common.sh@33 -- # return 0
00:03:22.929 19:34:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.929 19:34:04 -- setup/hugepages.sh@112 -- # get_nodes
00:03:22.929 19:34:04 -- setup/hugepages.sh@27 -- # local node
00:03:22.929 19:34:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.929 19:34:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:22.929 19:34:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.929 19:34:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:22.929 19:34:04 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:22.929 19:34:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:22.929 19:34:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.929 19:34:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
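get_nodes then enumerates the NUMA nodes and records how the 1024 pages landed per node (all on node0 here, hence nodes_sys[0]=1024 and nodes_sys[1]=0); the per-node get_meminfo calls that follow switch mem_f to /sys/devices/system/node/node<n>/meminfo. A sketch of the enumeration, assuming the per-node counts come from the standard sysfs nr_hugepages files (the trace only shows the already-expanded assignments, so that source is a guess, and the function name is illustrative):

  shopt -s extglob
  get_nodes_sketch() {
      local node
      local -a nodes_sys=()
      for node in /sys/devices/system/node/node+([0-9]); do
          # e.g. nodes_sys[0]=1024, nodes_sys[1]=0 on this box
          nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
      done
      echo "no_nodes=${#nodes_sys[@]}"     # 2 on this two-socket machine
  }
  get_nodes_sketch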
-- setup/common.sh@18 -- # local node=0 00:03:22.929 19:34:04 -- setup/common.sh@19 -- # local var val 00:03:22.929 19:34:04 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.929 19:34:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.929 19:34:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.929 19:34:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.929 19:34:04 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.929 19:34:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.929 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.929 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.929 19:34:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21871784 kB' 'MemUsed: 10958100 kB' 'SwapCached: 0 kB' 'Active: 7182256 kB' 'Inactive: 268120 kB' 'Active(anon): 6781980 kB' 'Inactive(anon): 0 kB' 'Active(file): 400276 kB' 'Inactive(file): 268120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7303952 kB' 'Mapped: 59968 kB' 'AnonPages: 149616 kB' 'Shmem: 6635556 kB' 'KernelStack: 7624 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281476 kB' 'Slab: 517016 kB' 'SReclaimable: 281476 kB' 'SUnreclaim: 235540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:22.929 19:34:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.929 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.929 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.929 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.929 19:34:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.929 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.929 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.929 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.929 19:34:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 
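The scans above are setup/common.sh's get_meminfo at work: it snapshots /proc/meminfo (or, for per-node queries, /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node <id> " prefix that is stripped first), then walks every field with IFS=': ' read -r var val _ until the requested key matches and its value is echoed. The \H\u\g\e\P\a\g\e\s\_... strings in the trace are just xtrace backslash-escaping each character of the literal comparison pattern. A minimal standalone sketch of that loop, reconstructed from the trace rather than copied from the SPDK source:

#!/usr/bin/env bash
shopt -s extglob
# get_meminfo KEY [NODE] -- echo KEY's value; mirrors the loop traced above.
# A sketch reconstructed from the trace; the real setup/common.sh may differ.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines start with "Node <id> "; strip that prefix so
    # the field names match their /proc/meminfo counterparts.
    mem=("${mem[@]#Node +([0-9]) }")
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total      # 1024 on the machine traced above
get_meminfo HugePages_Surp 0     # per-node query against node0 -> 0

Run against the node0 data printed above, those two calls return 1024 and 0, matching the echo 1024 / echo 0 lines in the trace.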
00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # continue 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.930 19:34:04 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.930 19:34:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.930 19:34:04 -- setup/common.sh@33 -- # echo 0 00:03:22.930 19:34:04 -- setup/common.sh@33 -- # return 0 00:03:22.930 19:34:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.931 19:34:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.931 19:34:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.931 19:34:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.931 19:34:04 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:22.931 node0=1024 expecting 1024 00:03:22.931 19:34:04 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:22.931 19:34:04 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:22.931 19:34:04 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:22.931 19:34:04 -- setup/hugepages.sh@202 -- # setup output 00:03:22.931 19:34:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.931 19:34:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:23.867 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:23.867 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:23.867 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:23.867 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:23.867 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:23.867 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:23.867 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:23.867 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:23.867 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:23.867 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:23.867 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:23.867 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:23.867 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:23.867 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:23.867 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:23.867 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:23.867 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:24.129 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:24.129 19:34:05 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:03:24.129 19:34:05 -- setup/hugepages.sh@89 -- # local node 00:03:24.129 19:34:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.129 19:34:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.129 19:34:05 -- setup/hugepages.sh@92 -- # local surp 00:03:24.129 19:34:05 -- setup/hugepages.sh@93 -- # local resv 00:03:24.129 19:34:05 -- setup/hugepages.sh@94 -- # local anon 00:03:24.129 19:34:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.129 19:34:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.129 19:34:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.129 19:34:05 -- setup/common.sh@18 -- # local node= 00:03:24.129 19:34:05 -- setup/common.sh@19 -- # local var val 00:03:24.129 19:34:05 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.129 19:34:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.129 19:34:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.129 19:34:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.129 19:34:05 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.129 19:34:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36820568 kB' 'MemAvailable: 41977584 kB' 'Buffers: 2696 kB' 'Cached: 18798984 kB' 'SwapCached: 0 kB' 'Active: 14715348 kB' 'Inactive: 4646328 kB' 'Active(anon): 14101356 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562832 kB' 'Mapped: 239428 kB' 'Shmem: 13541360 kB' 'KReclaimable: 541132 kB' 'Slab: 932584 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391452 kB' 'KernelStack: 12896 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15273184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196744 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB' 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.129 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.129 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.130 19:34:05 -- setup/common.sh@33 -- # echo 0 00:03:24.130 19:34:05 -- setup/common.sh@33 -- # return 0 00:03:24.130 19:34:05 -- setup/hugepages.sh@97 -- # anon=0 00:03:24.130 19:34:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.130 
19:34:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.130 19:34:05 -- setup/common.sh@18 -- # local node= 00:03:24.130 19:34:05 -- setup/common.sh@19 -- # local var val 00:03:24.130 19:34:05 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.130 19:34:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.130 19:34:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.130 19:34:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.130 19:34:05 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.130 19:34:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36820836 kB' 'MemAvailable: 41977852 kB' 'Buffers: 2696 kB' 'Cached: 18798984 kB' 'SwapCached: 0 kB' 'Active: 14715064 kB' 'Inactive: 4646328 kB' 'Active(anon): 14101072 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562968 kB' 'Mapped: 239372 kB' 'Shmem: 13541360 kB' 'KReclaimable: 541132 kB' 'Slab: 932576 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391444 kB' 'KernelStack: 12880 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15273196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196664 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB' 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.130 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.130 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # 
continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.131 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.131 19:34:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.132 19:34:05 -- setup/common.sh@33 -- # echo 0 00:03:24.132 19:34:05 -- setup/common.sh@33 -- # return 0 00:03:24.132 19:34:05 -- setup/hugepages.sh@99 -- # surp=0 00:03:24.132 19:34:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.132 19:34:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.132 19:34:05 -- setup/common.sh@18 -- # local node= 00:03:24.132 19:34:05 -- setup/common.sh@19 -- # local var val 00:03:24.132 19:34:05 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.132 19:34:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.132 19:34:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.132 19:34:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.132 19:34:05 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.132 19:34:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36821840 kB' 'MemAvailable: 41978856 kB' 'Buffers: 2696 kB' 'Cached: 18799000 kB' 'SwapCached: 0 kB' 
'Active: 14714612 kB' 'Inactive: 4646328 kB' 'Active(anon): 14100620 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562424 kB' 'Mapped: 239356 kB' 'Shmem: 13541376 kB' 'KReclaimable: 541132 kB' 'Slab: 932568 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391436 kB' 'KernelStack: 12912 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15273212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196664 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB' 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.132 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.132 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 
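Earlier in this pass the script settled anon=0 after testing the transparent-hugepage policy ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]], i.e. the kernel reports "always [madvise] never"): AnonHugePages is only folded into the expected total when THP is not pinned to [never]. A sketch of that check, with illustrative variable names and assuming the standard sysfs path:

#!/usr/bin/env bash
# Sketch of the anon-hugepage accounting seen in this pass; variable names
# are illustrative, the real logic lives in setup/hugepages.sh.
anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" above
# Count AnonHugePages only when THP is not pinned to [never].
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
fi
echo "anon=$anon"

Here the branch is taken, but AnonHugePages reads 0 kB, so anon stays 0 either way.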
00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.133 19:34:05 -- setup/common.sh@33 -- # echo 0 00:03:24.133 19:34:05 -- setup/common.sh@33 -- # return 0 00:03:24.133 19:34:05 -- setup/hugepages.sh@100 -- # resv=0 00:03:24.133 19:34:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.133 nr_hugepages=1024 00:03:24.133 19:34:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.133 resv_hugepages=0 00:03:24.133 19:34:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.133 surplus_hugepages=0 00:03:24.133 19:34:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.133 anon_hugepages=0 00:03:24.133 19:34:05 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.133 19:34:05 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.133 19:34:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.133 19:34:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.133 19:34:05 -- setup/common.sh@18 -- # local node= 00:03:24.133 19:34:05 -- setup/common.sh@19 -- # local var val 00:03:24.133 19:34:05 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.133 19:34:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.133 19:34:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.133 19:34:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.133 19:34:05 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.133 19:34:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 36821840 kB' 'MemAvailable: 41978856 kB' 'Buffers: 2696 kB' 'Cached: 18799012 kB' 'SwapCached: 0 kB' 'Active: 14715100 kB' 'Inactive: 4646328 kB' 'Active(anon): 14101108 kB' 'Inactive(anon): 0 kB' 'Active(file): 613992 kB' 'Inactive(file): 4646328 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562904 kB' 'Mapped: 239356 kB' 'Shmem: 13541388 kB' 'KReclaimable: 541132 kB' 'Slab: 932568 kB' 'SReclaimable: 541132 kB' 'SUnreclaim: 391436 kB' 'KernelStack: 12928 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 15274124 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 196696 kB' 'VmallocChunk: 0 kB' 'Percpu: 43776 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1760860 kB' 'DirectMap2M: 26470400 kB' 'DirectMap1G: 40894464 kB' 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.133 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.133 19:34:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- 
setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.134 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.134 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.135 19:34:05 -- 
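The scan traced above is setup/common.sh's get_meminfo walking a snapshot of /proc/meminfo one field at a time with an IFS=': ' read. A minimal standalone sketch of the same parse, assuming only bash and a readable /proc/meminfo (the requested field is just an example):

#!/usr/bin/env bash
# Sketch of the get_meminfo-style scan traced above: split each
# /proc/meminfo line on ': ' and stop at the requested field.
get=HugePages_Total          # example field; any /proc/meminfo key works
while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
        echo "${val%% *}"    # numeric value only; the "kB" unit lands in "_"
        break
    fi
done < /proc/meminfo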
setup/common.sh@33 -- # echo 1024 00:03:24.135 19:34:05 -- setup/common.sh@33 -- # return 0 00:03:24.135 19:34:05 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.135 19:34:05 -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.135 19:34:05 -- setup/hugepages.sh@27 -- # local node 00:03:24.135 19:34:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.135 19:34:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:24.135 19:34:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.135 19:34:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:24.135 19:34:05 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.135 19:34:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.135 19:34:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.135 19:34:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.135 19:34:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.135 19:34:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.135 19:34:05 -- setup/common.sh@18 -- # local node=0 00:03:24.135 19:34:05 -- setup/common.sh@19 -- # local var val 00:03:24.135 19:34:05 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.135 19:34:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.135 19:34:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.135 19:34:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.135 19:34:05 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.135 19:34:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.135 19:34:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21863080 kB' 'MemUsed: 10966804 kB' 'SwapCached: 0 kB' 'Active: 7182604 kB' 'Inactive: 268120 kB' 'Active(anon): 6782328 kB' 'Inactive(anon): 0 kB' 'Active(file): 400276 kB' 'Inactive(file): 268120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7304028 kB' 'Mapped: 59968 kB' 'AnonPages: 149872 kB' 'Shmem: 6635632 kB' 'KernelStack: 7608 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 281476 kB' 'Slab: 517008 kB' 'SReclaimable: 281476 kB' 'SUnreclaim: 235532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.135 19:34:05 -- setup/common.sh@32 -- # continue 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.135 19:34:05 -- setup/common.sh@31 -- # read 
-r var val _
[... identical compare-and-continue xtrace repeats for each node0 meminfo field until the requested one matches ...]
00:03:24.136 19:34:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.136 19:34:05 -- setup/common.sh@33 -- # echo 0
00:03:24.136 19:34:05 -- setup/common.sh@33 -- # return 0
00:03:24.136 19:34:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:24.136 19:34:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.136 19:34:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.136 19:34:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.136 19:34:05 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:24.136 node0=1024 expecting 1024
00:03:24.136 19:34:05 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:24.136
00:03:24.136 real 0m2.609s
00:03:24.136 user 0m1.002s
00:03:24.136 sys 0m1.510s
00:03:24.136 19:34:05 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:24.136 19:34:05 -- common/autotest_common.sh@10 -- # set +x
00:03:24.136 ************************************
00:03:24.136 END TEST no_shrink_alloc
00:03:24.136 ************************************
00:03:24.136 19:34:05 -- setup/hugepages.sh@217 -- # clear_hp
00:03:24.136 19:34:05 -- setup/hugepages.sh@37 -- # local node hp
00:03:24.136 19:34:05 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:24.136
19:34:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.136 19:34:05 -- setup/hugepages.sh@41 -- # echo 0 00:03:24.136 19:34:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.136 19:34:05 -- setup/hugepages.sh@41 -- # echo 0 00:03:24.136 19:34:05 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:24.136 19:34:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.136 19:34:05 -- setup/hugepages.sh@41 -- # echo 0 00:03:24.136 19:34:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:24.136 19:34:05 -- setup/hugepages.sh@41 -- # echo 0 00:03:24.136 19:34:05 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:24.136 19:34:05 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:24.136 00:03:24.136 real 0m11.551s 00:03:24.136 user 0m4.393s 00:03:24.136 sys 0m5.891s 00:03:24.136 19:34:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:24.136 19:34:05 -- common/autotest_common.sh@10 -- # set +x 00:03:24.136 ************************************ 00:03:24.136 END TEST hugepages 00:03:24.136 ************************************ 00:03:24.394 19:34:05 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:24.394 19:34:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:24.394 19:34:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:24.394 19:34:05 -- common/autotest_common.sh@10 -- # set +x 00:03:24.394 ************************************ 00:03:24.394 START TEST driver 00:03:24.394 ************************************ 00:03:24.394 19:34:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:24.394 * Looking for test storage... 
00:03:24.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:24.394 19:34:05 -- setup/driver.sh@68 -- # setup reset 00:03:24.394 19:34:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:24.394 19:34:05 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.944 19:34:08 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:26.944 19:34:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:26.944 19:34:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:26.944 19:34:08 -- common/autotest_common.sh@10 -- # set +x 00:03:26.944 ************************************ 00:03:26.944 START TEST guess_driver 00:03:26.944 ************************************ 00:03:26.944 19:34:08 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:26.944 19:34:08 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:26.944 19:34:08 -- setup/driver.sh@47 -- # local fail=0 00:03:26.944 19:34:08 -- setup/driver.sh@49 -- # pick_driver 00:03:26.944 19:34:08 -- setup/driver.sh@36 -- # vfio 00:03:26.944 19:34:08 -- setup/driver.sh@21 -- # local iommu_grups 00:03:26.944 19:34:08 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:26.944 19:34:08 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:26.944 19:34:08 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:26.944 19:34:08 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:26.944 19:34:08 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:26.944 19:34:08 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:26.944 19:34:08 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:26.944 19:34:08 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:26.944 19:34:08 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:26.944 19:34:08 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:26.944 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:26.944 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:26.944 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:26.944 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:26.944 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:26.944 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:26.944 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:26.944 19:34:08 -- setup/driver.sh@30 -- # return 0 00:03:26.944 19:34:08 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:26.944 19:34:08 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:26.944 19:34:08 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:26.944 19:34:08 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:26.944 Looking for driver=vfio-pci 00:03:26.944 19:34:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.944 19:34:08 -- setup/driver.sh@45 -- # setup output config 00:03:26.944 19:34:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.944 19:34:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:27.880 19:34:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:27.880 19:34:09 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]]
00:03:27.880 19:34:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the marker/driver xtrace check repeats for each remaining line of the setup.sh config output ...]
00:03:29.078 19:34:10 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:29.078 19:34:10 -- setup/driver.sh@65 -- # setup reset
00:03:29.078 19:34:10 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:29.078 19:34:10 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:31.613
00:03:31.613 real 0m4.670s
00:03:31.613 user 0m1.040s
00:03:31.613 sys 0m1.772s
00:03:31.613 19:34:12 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:31.613 19:34:12 -- common/autotest_common.sh@10 -- # set +x
00:03:31.613 ************************************
00:03:31.613 END TEST guess_driver
00:03:31.613 ************************************
00:03:31.613
00:03:31.613 real 0m7.136s
00:03:31.613 user 0m1.573s
00:03:31.613 sys 0m2.742s
00:03:31.613 19:34:12 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:31.613 19:34:12 -- common/autotest_common.sh@10 -- # set +x
00:03:31.613 ************************************
00:03:31.613 END TEST driver
00:03:31.613 ************************************
00:03:31.613 19:34:12 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:31.613 19:34:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:31.613 19:34:12 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:31.613 19:34:12 -- common/autotest_common.sh@10 -- # set +x
00:03:31.613 ************************************
00:03:31.613 START TEST devices
00:03:31.613 ************************************
00:03:31.613 19:34:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:31.613 * Looking for test storage...
00:03:31.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:31.613 19:34:13 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:31.613 19:34:13 -- setup/devices.sh@192 -- # setup reset
00:03:31.613 19:34:13 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:31.613 19:34:13 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:32.988 19:34:14 -- setup/devices.sh@194 -- # get_zoned_devs
00:03:32.988 19:34:14 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:03:32.988 19:34:14 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:03:32.988 19:34:14 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:03:32.989 19:34:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:03:32.989 19:34:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:03:32.989 19:34:14 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:03:32.989 19:34:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:32.989 19:34:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:03:32.989 19:34:14 -- setup/devices.sh@196 -- # blocks=()
00:03:32.989 19:34:14 -- setup/devices.sh@196 -- # declare -a blocks
00:03:32.989 19:34:14 -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:32.989 19:34:14 -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:32.989 19:34:14 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:32.989 19:34:14 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:32.989 19:34:14 -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:32.989 19:34:14 -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:32.989 19:34:14 -- setup/devices.sh@202 -- # pci=0000:88:00.0
00:03:32.989 19:34:14 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:03:32.989 19:34:14 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:32.989 19:34:14 -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:32.989 19:34:14 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:32.989 No valid GPT data, bailing
00:03:32.989 19:34:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:32.989 19:34:14 -- scripts/common.sh@391 -- # pt=
00:03:32.989 19:34:14 -- scripts/common.sh@392 -- # return 1
00:03:32.989 19:34:14 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:32.989 19:34:14 -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:32.989 19:34:14 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:32.989 19:34:14 -- setup/common.sh@80 -- # echo 1000204886016
00:03:32.989 19:34:14 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:03:32.989 19:34:14 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:32.989 19:34:14 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0
00:03:32.989 19:34:14 -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:03:32.989 19:34:14 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:03:32.989 19:34:14 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:32.989 19:34:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:32.989 19:34:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:32.989 19:34:14 -- common/autotest_common.sh@10 -- # set +x
00:03:33.258 ************************************
00:03:33.258 START TEST nvme_mount
00:03:33.258 ************************************
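block_in_use above decides whether the disk is claimable by first asking spdk-gpt.py and then falling back to blkid: an empty PTTYPE means no partition table was found. A rough sketch of that fallback probe, assuming blkid(8) is installed (the device path is only an illustrative default):

#!/usr/bin/env bash
# Sketch of the block_in_use fallback traced above: ask blkid for a
# partition-table type; no output means the disk looks unclaimed.
dev=${1:-/dev/nvme0n1}       # illustrative default, matching the test disk
pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null || true)
if [[ -z $pt ]]; then
    echo "$dev: no partition table, treating as free"
else
    echo "$dev: already partitioned (PTTYPE=$pt)"
fi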
common/autotest_common.sh@1111 -- # nvme_mount 00:03:33.258 19:34:14 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:33.258 19:34:14 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:33.258 19:34:14 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.258 19:34:14 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:33.259 19:34:14 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:33.259 19:34:14 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:33.259 19:34:14 -- setup/common.sh@40 -- # local part_no=1 00:03:33.259 19:34:14 -- setup/common.sh@41 -- # local size=1073741824 00:03:33.259 19:34:14 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:33.259 19:34:14 -- setup/common.sh@44 -- # parts=() 00:03:33.259 19:34:14 -- setup/common.sh@44 -- # local parts 00:03:33.259 19:34:14 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:33.259 19:34:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:33.259 19:34:14 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:33.259 19:34:14 -- setup/common.sh@46 -- # (( part++ )) 00:03:33.259 19:34:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:33.259 19:34:14 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:33.259 19:34:14 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:33.259 19:34:14 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:34.201 Creating new GPT entries in memory. 00:03:34.201 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:34.201 other utilities. 00:03:34.201 19:34:15 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:34.201 19:34:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.201 19:34:15 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:34.201 19:34:15 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:34.201 19:34:15 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:35.138 Creating new GPT entries in memory. 00:03:35.138 The operation has completed successfully. 
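The sequence just completed zaps the disk, writes a single GPT partition while holding flock on the device node, and waits on a uevent helper so the new partition node exists before it is formatted. A condensed sketch of that flow; the sector numbers are illustrative and udevadm settle merely stands in for the repo's sync_dev_uevents.sh helper:

#!/usr/bin/env bash
# Condensed sketch of the partition step traced above.
set -e
disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                            # destroy old GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # first partition, 1 GiB of 512 B sectors
udevadm settle                                      # wait for the kernel to publish ${disk}p1
[[ -b ${disk}p1 ]] && mkfs.ext4 -qF "${disk}p1"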
00:03:35.138 19:34:16 -- setup/common.sh@57 -- # (( part++ )) 00:03:35.138 19:34:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.138 19:34:16 -- setup/common.sh@62 -- # wait 1568898 00:03:35.138 19:34:16 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.138 19:34:16 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:35.138 19:34:16 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.138 19:34:16 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:35.138 19:34:16 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:35.408 19:34:16 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.408 19:34:16 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:35.408 19:34:16 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:35.408 19:34:16 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:35.408 19:34:16 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.408 19:34:16 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:35.408 19:34:16 -- setup/devices.sh@53 -- # local found=0 00:03:35.408 19:34:16 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:35.408 19:34:16 -- setup/devices.sh@56 -- # : 00:03:35.408 19:34:16 -- setup/devices.sh@59 -- # local pci status 00:03:35.408 19:34:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.408 19:34:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:35.408 19:34:16 -- setup/devices.sh@47 -- # setup output config 00:03:35.408 19:34:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.408 19:34:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:36.379 19:34:17 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.379 19:34:17 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:36.379 19:34:17 -- setup/devices.sh@63 -- # found=1 00:03:36.379 19:34:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.379 19:34:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.380 19:34:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.380 19:34:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.380 19:34:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.380 19:34:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.380 19:34:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.380 19:34:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.380 19:34:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.380 19:34:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.380 
19:34:17 -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the PCI allow-list xtrace check repeats for each remaining 0000:00:04.x and 0000:80:04.x device ...]
00:03:36.639 19:34:17 -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:36.639 19:34:17 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:36.639 19:34:17 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:36.639 19:34:17 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:36.639 19:34:17 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:36.639 19:34:17 -- setup/devices.sh@110 -- # cleanup_nvme
00:03:36.639 19:34:17 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:36.639 19:34:17 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:36.639 19:34:17 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:36.639 19:34:17 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:36.640 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:36.640 19:34:17 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:36.640 19:34:17 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:36.898 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:36.898 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:36.898 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:36.898
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:36.898 19:34:18 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:36.898 19:34:18 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:36.898 19:34:18 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.898 19:34:18 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:36.898 19:34:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:36.898 19:34:18 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.898 19:34:18 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.898 19:34:18 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:36.898 19:34:18 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:36.898 19:34:18 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.898 19:34:18 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.898 19:34:18 -- setup/devices.sh@53 -- # local found=0 00:03:36.898 19:34:18 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:36.898 19:34:18 -- setup/devices.sh@56 -- # : 00:03:36.898 19:34:18 -- setup/devices.sh@59 -- # local pci status 00:03:36.898 19:34:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.898 19:34:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:36.898 19:34:18 -- setup/devices.sh@47 -- # setup output config 00:03:36.898 19:34:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.898 19:34:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:37.835 19:34:19 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.835 19:34:19 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:37.835 19:34:19 -- setup/devices.sh@63 -- # found=1 00:03:37.835 19:34:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.835 19:34:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.835 19:34:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.835 19:34:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.835 19:34:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.835 19:34:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.835 19:34:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.835 19:34:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.835 19:34:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.835 19:34:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:37.835 19:34:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.835 19:34:19 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:37.835 19:34:19 -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the PCI allow-list xtrace check repeats for each remaining 0000:00:04.x and 0000:80:04.x device ...]
00:03:38.094 19:34:19 -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:38.094 19:34:19 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:38.094 19:34:19 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:38.094 19:34:19 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:38.094 19:34:19 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:38.094 19:34:19 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:38.094 19:34:19 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' ''
00:03:38.094 19:34:19 -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:38.094 19:34:19 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:03:38.094 19:34:19 -- setup/devices.sh@50 -- # local mount_point=
00:03:38.094 19:34:19 -- setup/devices.sh@51 -- # local test_file=
00:03:38.094 19:34:19 -- setup/devices.sh@53 -- # local found=0
00:03:38.094 19:34:19 -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:38.095 19:34:19 -- setup/devices.sh@59 -- # local pci status
00:03:38.095 19:34:19 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:38.095 19:34:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:38.095 19:34:19 -- setup/devices.sh@47 -- # setup output config
00:03:38.095 19:34:19 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:38.095 19:34:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:39.473 19:34:20 --
setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:39.473 19:34:20 -- setup/devices.sh@63 -- # found=1 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.473 19:34:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:39.473 19:34:20 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:39.473 19:34:20 -- setup/devices.sh@68 -- # return 0 00:03:39.473 19:34:20 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:39.473 19:34:20 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.473 19:34:20 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:03:39.473 19:34:20 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:39.473 19:34:20 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:39.473 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:39.473 00:03:39.473 real 0m6.171s 00:03:39.473 user 0m1.395s 00:03:39.473 sys 0m2.330s 00:03:39.473 19:34:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:39.473 19:34:20 -- common/autotest_common.sh@10 -- # set +x 00:03:39.473 ************************************ 00:03:39.473 END TEST nvme_mount 00:03:39.473 ************************************ 00:03:39.473 19:34:20 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:39.473 19:34:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:39.473 19:34:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:39.473 19:34:20 -- common/autotest_common.sh@10 -- # set +x 00:03:39.473 ************************************ 00:03:39.473 START TEST dm_mount 00:03:39.473 ************************************ 00:03:39.473 19:34:20 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:39.473 19:34:20 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:39.473 19:34:20 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:39.473 19:34:20 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:39.473 19:34:20 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:39.473 19:34:20 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:39.473 19:34:20 -- setup/common.sh@40 -- # local part_no=2 00:03:39.473 19:34:20 -- setup/common.sh@41 -- # local size=1073741824 00:03:39.473 19:34:20 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:39.473 19:34:20 -- setup/common.sh@44 -- # parts=() 00:03:39.473 19:34:20 -- setup/common.sh@44 -- # local parts 00:03:39.473 19:34:20 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:39.473 19:34:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:39.473 19:34:20 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:39.473 19:34:20 -- setup/common.sh@46 -- # (( part++ )) 00:03:39.473 19:34:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:39.473 19:34:20 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:39.473 19:34:20 -- setup/common.sh@46 -- # (( part++ )) 00:03:39.473 19:34:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:39.473 19:34:20 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:39.473 19:34:20 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:39.473 19:34:20 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:40.414 Creating new GPT entries in memory. 00:03:40.414 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:40.414 other utilities. 00:03:40.414 19:34:21 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:40.414 19:34:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.414 19:34:21 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:40.414 19:34:21 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:40.414 19:34:21 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:41.795 Creating new GPT entries in memory. 00:03:41.795 The operation has completed successfully. 
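[Editor's note: the xtrace above shows how the partition_drive helper lays the disk out. It converts the fixed 1 GiB partition size into 512-byte sectors, zaps any existing label, and then computes each partition's sector range from 2048 onward, running sgdisk under flock so concurrent partition-table rescans cannot interleave. A condensed, hedged reconstruction of that loop in bash follows; the real helper lives in spdk/test/setup/common.sh, and this function body is illustrative rather than the verbatim script.]

  # Sketch: carve $part_no partitions of $size bytes out of /dev/$disk,
  # reproducing the sgdisk calls visible in the trace above.
  partition_drive() {
    local disk=$1 part_no=${2:-2} size=1073741824   # 1 GiB per partition (as traced)
    local part part_start=0 part_end=0
    (( size /= 512 ))                               # bytes -> 512-byte sectors
    sgdisk "/dev/$disk" --zap-all                   # destroy old GPT/MBR structures
    for (( part = 1; part <= part_no; part++ )); do
      (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
      (( part_end = part_start + size - 1 ))
      # flock serializes sgdisk against concurrent partition rescans
      flock "/dev/$disk" sgdisk "/dev/$disk" "--new=$part:$part_start:$part_end"
    done
  }

[With size = 1073741824 / 512 = 2097152 sectors, the first call is --new=1:2048:2099199 and the second --new=2:2099200:4196351, exactly the ranges logged here.]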
00:03:41.795 19:34:22 -- setup/common.sh@57 -- # (( part++ ))
00:03:41.795 19:34:22 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:41.795 19:34:22 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:41.795 19:34:22 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:41.795 19:34:22 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:03:42.753 The operation has completed successfully.
00:03:42.753 19:34:23 -- setup/common.sh@57 -- # (( part++ ))
00:03:42.753 19:34:23 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:42.753 19:34:23 -- setup/common.sh@62 -- # wait 1571292
00:03:42.753 19:34:23 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:03:42.753 19:34:23 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:42.753 19:34:23 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:42.753 19:34:23 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:03:42.753 19:34:23 -- setup/devices.sh@160 -- # for t in {1..5}
00:03:42.753 19:34:23 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:42.753 19:34:23 -- setup/devices.sh@161 -- # break
00:03:42.753 19:34:23 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:42.753 19:34:23 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:03:42.753 19:34:23 -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:03:42.753 19:34:23 -- setup/devices.sh@166 -- # dm=dm-0
00:03:42.753 19:34:23 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:03:42.753 19:34:23 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:03:42.753 19:34:23 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:42.753 19:34:23 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:03:42.753 19:34:23 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:42.753 19:34:23 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:42.753 19:34:23 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:03:42.753 19:34:24 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:42.753 19:34:24 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:42.753 19:34:24 -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:42.753 19:34:24 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:03:42.753 19:34:24 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:42.753 19:34:24 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:42.753 19:34:24 -- setup/devices.sh@53 -- # local found=0
00:03:42.753 19:34:24 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:42.753 19:34:24 -- setup/devices.sh@56 -- # :
00:03:42.753 19:34:24 -- setup/devices.sh@59 -- # local pci status
00:03:42.753 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:42.753 19:34:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:42.753 19:34:24 -- setup/devices.sh@47 -- # setup output config
00:03:42.753 19:34:24 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:42.753 19:34:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:03:43.693 19:34:24 -- setup/devices.sh@63 -- # found=1
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.693 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.693 19:34:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:43.694 19:34:24 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.694 19:34:25 -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:43.694 19:34:25 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:03:43.694 19:34:25 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:43.694 19:34:25 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:43.694 19:34:25 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:43.694 19:34:25 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:43.952 19:34:25 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:03:43.952 19:34:25 -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:43.952 19:34:25 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:03:43.952 19:34:25 -- setup/devices.sh@50 -- # local mount_point=
00:03:43.952 19:34:25 -- setup/devices.sh@51 -- # local test_file=
00:03:43.952 19:34:25 -- setup/devices.sh@53 -- # local found=0
00:03:43.952 19:34:25 -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:43.952 19:34:25 -- setup/devices.sh@59 -- # local pci status
00:03:43.952 19:34:25 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:43.952 19:34:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:43.952 19:34:25 -- setup/devices.sh@47 -- # setup output config
00:03:43.952 19:34:25 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:43.952 19:34:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:03:44.889 19:34:26 -- setup/devices.sh@63 -- # found=1
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
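[Editor's note: the verify calls in this trace parse setup.sh config output line by line (read -r pci _ _ status) and only treat the target BDF as active when its status column carries the expected mount@/holder@ pairs. The holder@nvme0n1p1:dm-0 names come straight from the kernel's holders directory in sysfs. A minimal bash sketch of that lookup follows; the sysfs layout is standard, but the helper name here is ours, not from the SPDK scripts.]

  # Sketch: list the device-mapper holders of a partition via sysfs,
  # mirroring the /sys/class/block/nvme0n1p1/holders/dm-0 checks above.
  find_holders() {
    local part=$1 holder
    for holder in /sys/class/block/"$part"/holders/*; do
      [[ -e $holder ]] || continue            # no holders: glob stays unexpanded
      echo "$part is held by ${holder##*/}"   # e.g. nvme0n1p1 is held by dm-0
    done
  }
  find_holders nvme0n1p1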
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:44.889 19:34:26 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:44.889 19:34:26 -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:44.889 19:34:26 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:44.889 19:34:26 -- setup/devices.sh@68 -- # return 0
00:03:44.889 19:34:26 -- setup/devices.sh@187 -- # cleanup_dm
00:03:44.889 19:34:26 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:44.889 19:34:26 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:44.889 19:34:26 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:03:45.149 19:34:26 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:45.149 19:34:26 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:03:45.149 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:45.149 19:34:26 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:45.149 19:34:26 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:03:45.149
00:03:45.149 real 0m5.547s
00:03:45.149 user 0m0.870s
00:03:45.149 sys 0m1.508s
00:03:45.149 19:34:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:45.149 19:34:26 -- common/autotest_common.sh@10 -- # set +x
00:03:45.149 ************************************
00:03:45.149 END TEST dm_mount
00:03:45.149 ************************************
00:03:45.149 19:34:26 -- setup/devices.sh@1 -- # cleanup
00:03:45.149 19:34:26 -- setup/devices.sh@11 -- # cleanup_nvme
00:03:45.149 19:34:26 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:45.149 19:34:26 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:45.149 19:34:26 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:45.149 19:34:26 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:45.149 19:34:26 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:45.410 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:45.410 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:45.410 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:45.410 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:45.410 19:34:26 -- setup/devices.sh@12 -- # cleanup_dm
00:03:45.410 19:34:26 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:45.410 19:34:26 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:45.410 19:34:26 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:45.410 19:34:26 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:45.410 19:34:26 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:45.410 19:34:26 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:45.410
00:03:45.410 real 0m13.730s
00:03:45.410 user 0m2.941s
00:03:45.410 sys 0m4.907s
00:03:45.410 19:34:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:45.410 19:34:26 -- common/autotest_common.sh@10 -- # set +x
00:03:45.410 ************************************
00:03:45.410 END TEST devices
00:03:45.410 ************************************
00:03:45.410
00:03:45.410 real 0m43.643s
00:03:45.410 user 0m12.367s
00:03:45.410 sys 0m19.279s
00:03:45.410 19:34:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:45.410 19:34:26 -- common/autotest_common.sh@10 -- # set +x
00:03:45.410 ************************************
00:03:45.410 END TEST setup.sh
00:03:45.410 ************************************
00:03:45.410 19:34:26 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:46.348 Hugepages
00:03:46.348 node hugesize free / total
00:03:46.348 node0 1048576kB 0 / 0
00:03:46.348 node0 2048kB 2048 / 2048
00:03:46.348 node1 1048576kB 0 / 0
00:03:46.348 node1 2048kB 0 / 0
00:03:46.348
00:03:46.348 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:46.348 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:03:46.348 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:03:46.623 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:03:46.623 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:03:46.623 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:03:46.623 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:03:46.623 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:03:46.623 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:03:46.623 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:03:46.623 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:03:46.623 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:03:46.623 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:03:46.623 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:03:46.623 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:03:46.623 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:03:46.623 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:03:46.623 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:46.623 19:34:27 -- spdk/autotest.sh@130 -- # uname -s
00:03:46.623 19:34:27 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:03:46.623 19:34:27 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:03:46.623 19:34:27 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:48.007 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:48.007 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:48.007 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:48.007 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:48.007 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:48.007 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:48.007 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:48.007 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:48.007 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:48.007 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:48.007 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:48.007 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:48.007 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:48.007 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:48.007 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:48.007 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:48.948 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:48.948 19:34:30 -- common/autotest_common.sh@1518 -- # sleep 1
00:03:49.923 19:34:31 -- common/autotest_common.sh@1519 -- # bdfs=()
00:03:49.923 19:34:31 -- common/autotest_common.sh@1519 -- # local bdfs
00:03:49.923 19:34:31 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:03:49.923 19:34:31 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:03:49.923 19:34:31 -- common/autotest_common.sh@1499 -- # bdfs=()
00:03:49.923 19:34:31 -- common/autotest_common.sh@1499 -- # local bdfs
00:03:49.923 19:34:31 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:49.923 19:34:31 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:49.923 19:34:31 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr'
00:03:49.923 19:34:31 -- common/autotest_common.sh@1501 -- # (( 1 == 0 ))
00:03:49.923 19:34:31 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0
00:03:49.923 19:34:31 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:51.301 Waiting for block devices as requested
00:03:51.301 0000:88:00.0 (8086 0a54): vfio-pci -> nvme
00:03:51.301 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:03:51.561 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:03:51.561 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:03:51.561 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:03:51.561 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:03:51.820 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:03:51.820 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:03:51.820 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:03:51.820 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:03:52.078 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:03:52.078 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:03:52.078 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:03:52.078 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:03:52.338 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:03:52.338 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:03:52.338 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:03:52.597 19:34:33 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:03:52.597 19:34:33 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0
00:03:52.597 19:34:33 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0
00:03:52.597 19:34:33 -- common/autotest_common.sh@1488 -- # grep 0000:88:00.0/nvme/nvme
00:03:52.597 19:34:33 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:03:52.597 19:34:33 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]]
00:03:52.597 19:34:33 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0
00:03:52.597 19:34:33 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0
00:03:52.597 19:34:33 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:03:52.597 19:34:33 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:03:52.597 19:34:33 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:03:52.597 19:34:33 -- common/autotest_common.sh@1531 -- # grep oacs
00:03:52.597 19:34:33 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:03:52.597 19:34:33 -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:03:52.597 19:34:33 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:03:52.597 19:34:33 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:03:52.597 19:34:33 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:03:52.597 19:34:33 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:03:52.597 19:34:33 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:03:52.597 19:34:33 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:03:52.597 19:34:33 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:03:52.597 19:34:33 -- common/autotest_common.sh@1543 -- # continue
00:03:52.597 19:34:33 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup
00:03:52.597 19:34:33 -- common/autotest_common.sh@716 -- # xtrace_disable
00:03:52.597 19:34:33 -- common/autotest_common.sh@10 -- # set +x
00:03:52.597 19:34:33 -- spdk/autotest.sh@138 -- # timing_enter afterboot
00:03:52.597 19:34:33 -- common/autotest_common.sh@710 -- # xtrace_disable
00:03:52.597 19:34:33 -- common/autotest_common.sh@10 -- # set +x
00:03:52.597 19:34:33 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:53.977 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:53.977 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:53.977 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:53.977 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:53.977 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:53.977 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:53.977 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:53.977 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:53.977 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:53.977 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:53.977 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:53.977 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:53.977 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:53.977 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:53.977 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:53.977 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:54.918 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:54.918 19:34:36 -- spdk/autotest.sh@140 -- # timing_exit afterboot
00:03:54.918 19:34:36 -- common/autotest_common.sh@716 -- # xtrace_disable
00:03:54.918 19:34:36 -- common/autotest_common.sh@10 -- # set +x
00:03:54.918 19:34:36 -- spdk/autotest.sh@144 -- # opal_revert_cleanup
00:03:54.918 19:34:36 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs
00:03:54.918 19:34:36 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54
00:03:54.918 19:34:36 -- common/autotest_common.sh@1563 -- # bdfs=()
00:03:54.918 19:34:36 -- common/autotest_common.sh@1563 -- # local bdfs
00:03:54.918 19:34:36 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs
00:03:54.918 19:34:36 -- common/autotest_common.sh@1499 -- # bdfs=()
00:03:54.918 19:34:36 -- common/autotest_common.sh@1499 -- # local bdfs
00:03:54.918 19:34:36 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:54.918 19:34:36 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:54.918 19:34:36 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr'
00:03:54.918 19:34:36 -- common/autotest_common.sh@1501 -- # (( 1 == 0 ))
00:03:54.918 19:34:36 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0
00:03:54.918 19:34:36 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs)
00:03:54.918 19:34:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device
00:03:54.918 19:34:36 -- common/autotest_common.sh@1566 -- # device=0x0a54
00:03:54.918 19:34:36 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:03:54.918 19:34:36 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:03:54.918 19:34:36 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:88:00.0
00:03:54.918 19:34:36 -- common/autotest_common.sh@1578 -- # [[ -z 0000:88:00.0 ]]
00:03:54.918 19:34:36 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=1576460
00:03:54.918 19:34:36 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:03:54.918 19:34:36 -- common/autotest_common.sh@1584 -- # waitforlisten 1576460
00:03:54.918 19:34:36 -- common/autotest_common.sh@817 -- # '[' -z 1576460 ']'
00:03:54.918 19:34:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:54.918 19:34:36 -- common/autotest_common.sh@822 -- # local max_retries=100
00:03:54.918 19:34:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:54.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:54.918 19:34:36 -- common/autotest_common.sh@826 -- # xtrace_disable
00:03:54.918 19:34:36 -- common/autotest_common.sh@10 -- # set +x
00:03:55.178 [2024-04-24 19:34:36.466442] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization...
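[Editor's note: the opal_revert_cleanup step traced just above launches spdk_tgt in the background and parks in waitforlisten until the target answers on /var/tmp/spdk.sock. A simplified bash sketch of that polling idea follows; it uses the real scripts/rpc.py helper and the real rpc_get_methods RPC, but the actual waitforlisten in autotest_common.sh also enforces max_retries and cleans up on failure, so treat this as an illustration rather than the shipped implementation.]

  # Sketch: start spdk_tgt and poll its UNIX-domain RPC socket until ready.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 "$spdk_tgt_pid" 2> /dev/null || exit 1   # bail out if the target died early
      sleep 0.5
  done
  echo "spdk_tgt ($spdk_tgt_pid) is listening on /var/tmp/spdk.sock"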
00:03:55.178 [2024-04-24 19:34:36.466523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576460 ]
00:03:55.178 EAL: No free 2048 kB hugepages reported on node 1
00:03:55.178 [2024-04-24 19:34:36.524090] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:55.178 [2024-04-24 19:34:36.633492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:03:55.439 19:34:36 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:03:55.439 19:34:36 -- common/autotest_common.sh@850 -- # return 0
00:03:55.439 19:34:36 -- common/autotest_common.sh@1586 -- # bdf_id=0
00:03:55.439 19:34:36 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}"
00:03:55.439 19:34:36 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
00:03:58.729 nvme0n1
00:03:58.729 19:34:39 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:03:58.729 [2024-04-24 19:34:40.205760] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:03:58.729 [2024-04-24 19:34:40.205807] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:03:58.729 request:
00:03:58.729 {
00:03:58.729 "nvme_ctrlr_name": "nvme0",
00:03:58.729 "password": "test",
00:03:58.729 "method": "bdev_nvme_opal_revert",
00:03:58.729 "req_id": 1
00:03:58.730 }
00:03:58.730 Got JSON-RPC error response
00:03:58.730 response:
00:03:58.730 {
00:03:58.730 "code": -32603,
00:03:58.730 "message": "Internal error"
00:03:58.730 }
00:03:58.730 19:34:40 -- common/autotest_common.sh@1590 -- # true
00:03:58.730 19:34:40 -- common/autotest_common.sh@1591 -- # (( ++bdf_id ))
00:03:58.730 19:34:40 -- common/autotest_common.sh@1594 -- # killprocess 1576460
00:03:58.730 19:34:40 -- common/autotest_common.sh@936 -- # '[' -z 1576460 ']'
00:03:58.730 19:34:40 -- common/autotest_common.sh@940 -- # kill -0 1576460
00:03:58.730 19:34:40 -- common/autotest_common.sh@941 -- # uname
00:03:58.730 19:34:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:03:58.988 19:34:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1576460
00:03:58.988 19:34:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:03:58.988 19:34:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:03:58.988 19:34:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1576460'
00:03:58.988 killing process with pid 1576460
00:03:58.988 19:34:40 -- common/autotest_common.sh@955 -- # kill 1576460
00:03:58.988 19:34:40 -- common/autotest_common.sh@960 -- # wait 1576460
00:04:00.892 19:34:42 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:04:00.892 19:34:42 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:04:00.892 19:34:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:00.892 19:34:42 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:00.892 19:34:42 -- spdk/autotest.sh@162 -- # timing_enter lib
00:04:00.892 19:34:42 -- common/autotest_common.sh@710 -- # xtrace_disable
00:04:00.892 19:34:42 -- common/autotest_common.sh@10 -- # set +x
00:04:00.892 19:34:42 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:00.892 19:34:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:00.892 19:34:42 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:00.892 19:34:42 -- common/autotest_common.sh@10 -- # set +x
00:04:00.892 ************************************
00:04:00.892 START TEST env
00:04:00.892 ************************************
00:04:00.892 19:34:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:00.892 * Looking for test storage...
00:04:00.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:04:00.892 19:34:42 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:00.892 19:34:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:00.892 19:34:42 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:00.892 19:34:42 -- common/autotest_common.sh@10 -- # set +x
00:04:00.892 ************************************
00:04:00.892 START TEST env_memory
00:04:00.892 ************************************
00:04:00.892 19:34:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:00.892
00:04:00.892
00:04:00.892 CUnit - A unit testing framework for C - Version 2.1-3
00:04:00.892 http://cunit.sourceforge.net/
00:04:00.892
00:04:00.892
00:04:00.892 Suite: memory
00:04:00.892 Test: alloc and free memory map ...[2024-04-24 19:34:42.339650] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:00.892 passed
00:04:00.892 Test: mem map translation ...[2024-04-24 19:34:42.360043] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:00.892 [2024-04-24 19:34:42.360064] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:00.892 [2024-04-24 19:34:42.360124] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:00.892 [2024-04-24 19:34:42.360136] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:00.892 passed
00:04:00.892 Test: mem map registration ...[2024-04-24 19:34:42.400508] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:04:01.152 [2024-04-24 19:34:42.400526] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:04:01.152 passed
00:04:01.152 Test: mem map adjacent registrations ...passed
00:04:01.152
00:04:01.152 Run Summary: Type Total Ran Passed Failed Inactive
00:04:01.152 suites 1 1 n/a 0 0
00:04:01.152 tests 4 4 4 0 0
00:04:01.152 asserts 152 152 152 0 n/a
00:04:01.152
00:04:01.152 Elapsed time = 0.145 seconds
00:04:01.152
00:04:01.152 real 0m0.152s
00:04:01.152 user 0m0.146s
00:04:01.152 sys 0m0.006s
00:04:01.152 19:34:42 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:01.152 19:34:42 -- common/autotest_common.sh@10 -- # set +x
00:04:01.152 ************************************
00:04:01.152 END TEST env_memory
00:04:01.152 ************************************
00:04:01.152 19:34:42 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:01.152 19:34:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:01.152 19:34:42 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:01.152 19:34:42 -- common/autotest_common.sh@10 -- # set +x
00:04:01.152 ************************************
00:04:01.152 START TEST env_vtophys
00:04:01.152 ************************************
00:04:01.152 19:34:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:01.152 EAL: lib.eal log level changed from notice to debug
00:04:01.152 EAL: Detected lcore 0 as core 0 on socket 0
00:04:01.152 EAL: Detected lcore 1 as core 1 on socket 0
00:04:01.152 EAL: Detected lcore 2 as core 2 on socket 0
00:04:01.152 EAL: Detected lcore 3 as core 3 on socket 0
00:04:01.152 EAL: Detected lcore 4 as core 4 on socket 0
00:04:01.152 EAL: Detected lcore 5 as core 5 on socket 0
00:04:01.152 EAL: Detected lcore 6 as core 8 on socket 0
00:04:01.152 EAL: Detected lcore 7 as core 9 on socket 0
00:04:01.152 EAL: Detected lcore 8 as core 10 on socket 0
00:04:01.152 EAL: Detected lcore 9 as core 11 on socket 0
00:04:01.152 EAL: Detected lcore 10 as core 12 on socket 0
00:04:01.152 EAL: Detected lcore 11 as core 13 on socket 0
00:04:01.152 EAL: Detected lcore 12 as core 0 on socket 1
00:04:01.152 EAL: Detected lcore 13 as core 1 on socket 1
00:04:01.152 EAL: Detected lcore 14 as core 2 on socket 1
00:04:01.152 EAL: Detected lcore 15 as core 3 on socket 1
00:04:01.152 EAL: Detected lcore 16 as core 4 on socket 1
00:04:01.152 EAL: Detected lcore 17 as core 5 on socket 1
00:04:01.152 EAL: Detected lcore 18 as core 8 on socket 1
00:04:01.152 EAL: Detected lcore 19 as core 9 on socket 1
00:04:01.152 EAL: Detected lcore 20 as core 10 on socket 1
00:04:01.152 EAL: Detected lcore 21 as core 11 on socket 1
00:04:01.152 EAL: Detected lcore 22 as core 12 on socket 1
00:04:01.152 EAL: Detected lcore 23 as core 13 on socket 1
00:04:01.152 EAL: Detected lcore 24 as core 0 on socket 0
00:04:01.152 EAL: Detected lcore 25 as core 1 on socket 0
00:04:01.152 EAL: Detected lcore 26 as core 2 on socket 0
00:04:01.152 EAL: Detected lcore 27 as core 3 on socket 0
00:04:01.152 EAL: Detected lcore 28 as core 4 on socket 0
00:04:01.152 EAL: Detected lcore 29 as core 5 on socket 0
00:04:01.152 EAL: Detected lcore 30 as core 8 on socket 0
00:04:01.152 EAL: Detected lcore 31 as core 9 on socket 0
00:04:01.152 EAL: Detected lcore 32 as core 10 on socket 0
00:04:01.152 EAL: Detected lcore 33 as core 11 on socket 0
00:04:01.152 EAL: Detected lcore 34 as core 12 on socket 0
00:04:01.152 EAL: Detected lcore 35 as core 13 on socket 0
00:04:01.152 EAL: Detected lcore 36 as core 0 on socket 1
00:04:01.152 EAL: Detected lcore 37 as core 1 on socket 1
00:04:01.152 EAL: Detected lcore 38 as core 2 on socket 1
00:04:01.152 EAL: Detected lcore 39 as core 3 on socket 1
00:04:01.152 EAL: Detected lcore 40 as core 4 on socket 1
00:04:01.152 EAL: Detected lcore 41 as core 5 on socket 1
00:04:01.152 EAL: Detected lcore 42 as core 8 on socket 1
00:04:01.152 EAL: Detected lcore 43 as core 9 on socket 1
00:04:01.152 EAL: Detected lcore 44 as core 10 on socket 1
00:04:01.152 EAL: Detected lcore 45 as core 11 on socket 1
00:04:01.152 EAL: Detected lcore 46 as core 12 on socket 1
00:04:01.153 EAL: Detected lcore 47 as core 13 on socket 1
00:04:01.153 EAL: Maximum logical cores by configuration: 128
00:04:01.153 EAL: Detected CPU lcores: 48
00:04:01.153 EAL: Detected NUMA nodes: 2
00:04:01.153 EAL: Checking presence of .so 'librte_eal.so.24.0'
00:04:01.153 EAL: Detected shared linkage of DPDK
00:04:01.153 EAL: No shared files mode enabled, IPC will be disabled
00:04:01.153 EAL: Bus pci wants IOVA as 'DC'
00:04:01.153 EAL: Buses did not request a specific IOVA mode.
00:04:01.153 EAL: IOMMU is available, selecting IOVA as VA mode.
00:04:01.153 EAL: Selected IOVA mode 'VA'
00:04:01.153 EAL: No free 2048 kB hugepages reported on node 1
00:04:01.153 EAL: Probing VFIO support...
00:04:01.153 EAL: IOMMU type 1 (Type 1) is supported
00:04:01.153 EAL: IOMMU type 7 (sPAPR) is not supported
00:04:01.153 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:04:01.153 EAL: VFIO support initialized
00:04:01.153 EAL: Ask a virtual area of 0x2e000 bytes
00:04:01.153 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:01.153 EAL: Setting up physically contiguous memory...
00:04:01.153 EAL: Setting maximum number of open files to 524288
00:04:01.153 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:01.153 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:01.153 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:01.153 EAL: Ask a virtual area of 0x61000 bytes
00:04:01.153 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:01.153 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:01.153 EAL: Ask a virtual area of 0x400000000 bytes
00:04:01.153 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:01.153 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:01.153 EAL: Ask a virtual area of 0x61000 bytes
00:04:01.153 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:01.153 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:01.153 EAL: Ask a virtual area of 0x400000000 bytes
00:04:01.153 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:01.153 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:01.153 EAL: Ask a virtual area of 0x61000 bytes
00:04:01.153 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:01.153 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:01.153 EAL: Ask a virtual area of 0x400000000 bytes
00:04:01.153 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:01.153 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:01.153 EAL: Ask a virtual area of 0x61000 bytes
00:04:01.153 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:01.153 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:01.153 EAL: Ask a virtual area of 0x400000000 bytes
00:04:01.153 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:01.153 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:01.153 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:01.153 EAL: Ask a virtual area of 0x61000 bytes
00:04:01.153 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:01.153 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:01.153 EAL: Ask a virtual area of 0x400000000 bytes
00:04:01.153 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:01.153 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:01.153 EAL: Ask a virtual area of 0x61000 bytes
00:04:01.153 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:01.153 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:01.153 EAL: Ask a virtual area of 0x400000000 bytes
00:04:01.153 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:01.153 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:01.153 EAL: Ask a virtual area of 0x61000 bytes
00:04:01.153 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:01.153 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:01.153 EAL: Ask a virtual area of 0x400000000 bytes
00:04:01.153 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:01.153 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:01.153 EAL: Ask a virtual area of 0x61000 bytes
00:04:01.153 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:01.153 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:01.153 EAL: Ask a virtual area of 0x400000000 bytes
00:04:01.153 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:01.153 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:01.153 EAL: Hugepages will be freed exactly as allocated.
00:04:01.153 EAL: No shared files mode enabled, IPC is disabled
00:04:01.153 EAL: No shared files mode enabled, IPC is disabled
00:04:01.153 EAL: TSC frequency is ~2700000 KHz
00:04:01.153 EAL: Main lcore 0 is ready (tid=7fe07fff4a00;cpuset=[0])
00:04:01.153 EAL: Trying to obtain current memory policy.
00:04:01.153 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:01.153 EAL: Restoring previous memory policy: 0
00:04:01.153 EAL: request: mp_malloc_sync
00:04:01.153 EAL: No shared files mode enabled, IPC is disabled
00:04:01.153 EAL: Heap on socket 0 was expanded by 2MB
00:04:01.153 EAL: No shared files mode enabled, IPC is disabled
00:04:01.153 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:01.153 EAL: Mem event callback 'spdk:(nil)' registered
00:04:01.153
00:04:01.153
00:04:01.153 CUnit - A unit testing framework for C - Version 2.1-3
00:04:01.153 http://cunit.sourceforge.net/
00:04:01.153
00:04:01.153
00:04:01.153 Suite: components_suite
00:04:01.153 Test: vtophys_malloc_test ...passed
00:04:01.153 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:01.153 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:01.153 EAL: Restoring previous memory policy: 4
00:04:01.153 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.153 EAL: request: mp_malloc_sync
00:04:01.153 EAL: No shared files mode enabled, IPC is disabled
00:04:01.153 EAL: Heap on socket 0 was expanded by 4MB
00:04:01.153 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.153 EAL: request: mp_malloc_sync
00:04:01.153 EAL: No shared files mode enabled, IPC is disabled
00:04:01.153 EAL: Heap on socket 0 was shrunk by 4MB
00:04:01.153 EAL: Trying to obtain current memory policy.
00:04:01.153 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:01.153 EAL: Restoring previous memory policy: 4
00:04:01.153 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.153 EAL: request: mp_malloc_sync
00:04:01.153 EAL: No shared files mode enabled, IPC is disabled
00:04:01.153 EAL: Heap on socket 0 was expanded by 6MB
00:04:01.153 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.153 EAL: request: mp_malloc_sync
00:04:01.153 EAL: No shared files mode enabled, IPC is disabled
00:04:01.153 EAL: Heap on socket 0 was shrunk by 6MB
00:04:01.153 EAL: Trying to obtain current memory policy.
00:04:01.153 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:01.153 EAL: Restoring previous memory policy: 4
00:04:01.153 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.153 EAL: request: mp_malloc_sync
00:04:01.153 EAL: No shared files mode enabled, IPC is disabled
00:04:01.153 EAL: Heap on socket 0 was expanded by 10MB
00:04:01.153 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.153 EAL: request: mp_malloc_sync
00:04:01.153 EAL: No shared files mode enabled, IPC is disabled
00:04:01.153 EAL: Heap on socket 0 was shrunk by 10MB
00:04:01.153 EAL: Trying to obtain current memory policy.
00:04:01.153 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:01.412 EAL: Restoring previous memory policy: 4
00:04:01.412 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.412 EAL: request: mp_malloc_sync
00:04:01.412 EAL: No shared files mode enabled, IPC is disabled
00:04:01.412 EAL: Heap on socket 0 was expanded by 18MB
00:04:01.412 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.412 EAL: request: mp_malloc_sync
00:04:01.412 EAL: No shared files mode enabled, IPC is disabled
00:04:01.412 EAL: Heap on socket 0 was shrunk by 18MB
00:04:01.412 EAL: Trying to obtain current memory policy.
00:04:01.412 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:01.412 EAL: Restoring previous memory policy: 4
00:04:01.412 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.412 EAL: request: mp_malloc_sync
00:04:01.412 EAL: No shared files mode enabled, IPC is disabled
00:04:01.412 EAL: Heap on socket 0 was expanded by 34MB
00:04:01.412 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.412 EAL: request: mp_malloc_sync
00:04:01.412 EAL: No shared files mode enabled, IPC is disabled
00:04:01.412 EAL: Heap on socket 0 was shrunk by 34MB
00:04:01.412 EAL: Trying to obtain current memory policy.
00:04:01.412 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:01.413 EAL: Restoring previous memory policy: 4
00:04:01.413 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.413 EAL: request: mp_malloc_sync
00:04:01.413 EAL: No shared files mode enabled, IPC is disabled
00:04:01.413 EAL: Heap on socket 0 was expanded by 66MB
00:04:01.413 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.413 EAL: request: mp_malloc_sync
00:04:01.413 EAL: No shared files mode enabled, IPC is disabled
00:04:01.413 EAL: Heap on socket 0 was shrunk by 66MB
00:04:01.413 EAL: Trying to obtain current memory policy.
00:04:01.413 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:01.413 EAL: Restoring previous memory policy: 4
00:04:01.413 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.413 EAL: request: mp_malloc_sync
00:04:01.413 EAL: No shared files mode enabled, IPC is disabled
00:04:01.413 EAL: Heap on socket 0 was expanded by 130MB
00:04:01.413 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.413 EAL: request: mp_malloc_sync
00:04:01.413 EAL: No shared files mode enabled, IPC is disabled
00:04:01.413 EAL: Heap on socket 0 was shrunk by 130MB
00:04:01.413 EAL: Trying to obtain current memory policy.
00:04:01.413 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:01.413 EAL: Restoring previous memory policy: 4
00:04:01.413 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.413 EAL: request: mp_malloc_sync
00:04:01.413 EAL: No shared files mode enabled, IPC is disabled
00:04:01.413 EAL: Heap on socket 0 was expanded by 258MB
00:04:01.673 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.673 EAL: request: mp_malloc_sync
00:04:01.673 EAL: No shared files mode enabled, IPC is disabled
00:04:01.673 EAL: Heap on socket 0 was shrunk by 258MB
00:04:01.673 EAL: Trying to obtain current memory policy.
00:04:01.673 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:01.673 EAL: Restoring previous memory policy: 4
00:04:01.673 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.673 EAL: request: mp_malloc_sync
00:04:01.673 EAL: No shared files mode enabled, IPC is disabled
00:04:01.673 EAL: Heap on socket 0 was expanded by 514MB
00:04:01.933 EAL: Calling mem event callback 'spdk:(nil)'
00:04:01.933 EAL: request: mp_malloc_sync
00:04:01.933 EAL: No shared files mode enabled, IPC is disabled
00:04:01.933 EAL: Heap on socket 0 was shrunk by 514MB
00:04:01.933 EAL: Trying to obtain current memory policy.
00:04:01.933 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:02.192 EAL: Restoring previous memory policy: 4
00:04:02.192 EAL: Calling mem event callback 'spdk:(nil)'
00:04:02.192 EAL: request: mp_malloc_sync
00:04:02.192 EAL: No shared files mode enabled, IPC is disabled
00:04:02.192 EAL: Heap on socket 0 was expanded by 1026MB
00:04:02.452 EAL: Calling mem event callback 'spdk:(nil)'
00:04:02.711 EAL: request: mp_malloc_sync
00:04:02.711 EAL: No shared files mode enabled, IPC is disabled
00:04:02.711 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:02.711 passed
00:04:02.711
00:04:02.711 Run Summary: Type Total Ran Passed Failed Inactive
00:04:02.711 suites 1 1 n/a 0 0
00:04:02.711 tests 2 2 2 0 0
00:04:02.711 asserts 497 497 497 0 n/a
00:04:02.711
00:04:02.711 Elapsed time = 1.368 seconds
00:04:02.711 EAL: Calling mem event callback 'spdk:(nil)'
00:04:02.711 EAL: request: mp_malloc_sync
00:04:02.711 EAL: No shared files mode enabled, IPC is disabled
00:04:02.711 EAL: Heap on socket 0 was shrunk by 2MB
00:04:02.711 EAL: No shared files mode enabled, IPC is disabled
00:04:02.711 EAL: No shared files mode enabled, IPC is disabled
00:04:02.711 EAL: No shared files mode enabled, IPC is disabled
00:04:02.711
00:04:02.711 real 0m1.487s
00:04:02.711 user 0m0.847s
00:04:02.711 sys 0m0.608s
00:04:02.711 19:34:44 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:02.711 19:34:44 -- common/autotest_common.sh@10 -- # set +x
00:04:02.711 ************************************
00:04:02.711 END TEST env_vtophys
00:04:02.711 ************************************
00:04:02.711 19:34:44 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:02.711 19:34:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:02.711 19:34:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:02.711 19:34:44 -- common/autotest_common.sh@10 -- # set +x
00:04:02.711 ************************************
00:04:02.711 START TEST env_pci
00:04:02.711 ************************************
00:04:02.711 19:34:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:02.711
00:04:02.711
00:04:02.711 CUnit - A unit testing framework for C - Version 2.1-3
00:04:02.711 http://cunit.sourceforge.net/
00:04:02.711
00:04:02.711
00:04:02.711 Suite: pci
00:04:02.711 Test: pci_hook ...[2024-04-24 19:34:44.200263] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1577499 has claimed it
00:04:02.711 EAL: Cannot find device (10000:00:01.0)
00:04:02.711 EAL: Failed to attach device on primary process
00:04:02.711 passed
00:04:02.711
00:04:02.711 Run Summary: Type Total Ran Passed Failed Inactive
00:04:02.711 suites 1 1 n/a 0 0
00:04:02.711 tests 1 1 1 0 0
00:04:02.711 asserts 25 25 25 0 n/a
00:04:02.711
00:04:02.711 Elapsed time = 0.021 seconds
00:04:02.711
00:04:02.711 real 0m0.033s
00:04:02.711 user 0m0.010s
00:04:02.711 sys 0m0.023s
00:04:02.711 19:34:44 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:02.711 19:34:44 -- common/autotest_common.sh@10 -- # set +x
00:04:02.711 ************************************
00:04:02.711 END TEST env_pci
00:04:02.711 ************************************
00:04:02.972 19:34:44 -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:02.972 19:34:44 -- env/env.sh@15 -- # uname
00:04:02.972 19:34:44 -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:02.972 19:34:44 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:02.972 19:34:44 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:02.972 19:34:44 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:04:02.972 19:34:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:02.972 19:34:44 -- common/autotest_common.sh@10 -- # set +x
00:04:02.972 ************************************
00:04:02.972 START TEST env_dpdk_post_init
00:04:02.972 ************************************
00:04:02.972 19:34:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:02.972 EAL: Detected CPU lcores: 48
00:04:02.972 EAL: Detected NUMA nodes: 2
00:04:02.972 EAL: Detected shared linkage of DPDK
00:04:02.972 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:02.972 EAL: Selected IOVA mode 'VA'
00:04:02.972 EAL: No free 2048 kB hugepages reported on node 1
00:04:02.972 EAL: VFIO support initialized
00:04:02.972 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:02.972 EAL: Using IOMMU type 1 (Type 1)
00:04:02.972 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:04:02.972 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:04:02.972 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:04:03.231 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:04:03.231 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:04:03.231 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:04:03.231 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:04:03.231 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:04:03.231 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:04:03.231 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:04:03.231 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:04:03.231 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:04:03.231 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:04:03.231 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:04:03.231 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:04:03.231 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:04:04.172 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1)
00:04:07.500 EAL: Releasing PCI mapped resource for 0000:88:00.0
00:04:07.500 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000
00:04:07.500 Starting DPDK initialization...
00:04:07.500 Starting SPDK post initialization...
00:04:07.500 SPDK NVMe probe
00:04:07.500 Attaching to 0000:88:00.0
00:04:07.500 Attached to 0000:88:00.0
00:04:07.500 Cleaning up...
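The probe/attach/cleanup sequence printed above is SPDK's standard controller enumeration flow. A rough sketch of the calls behind it, using the public env and NVMe APIs; the callback bodies are placeholder assumptions, not the test binary's actual code:

    #include <stdbool.h>
    #include <stdio.h>
    #include <spdk/env.h>
    #include <spdk/nvme.h>

    /* Called once per discovered controller; returning true asks SPDK to
     * attach it, which produces the "Attaching to 0000:88:00.0" line. */
    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;
    }

    /* Called after the controller is initialized ("Attached to ..."). */
    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);      /* drives the EAL bring-up logged above */
        opts.core_mask = "0x1";         /* mirrors the -c 0x1 argument */
        if (spdk_env_init(&opts) < 0)
            return 1;
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }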
00:04:07.500
00:04:07.500 real 0m4.401s
00:04:07.500 user 0m3.268s
00:04:07.500 sys 0m0.190s
00:04:07.500 19:34:48 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:07.500 19:34:48 -- common/autotest_common.sh@10 -- # set +x
00:04:07.500 ************************************
00:04:07.500 END TEST env_dpdk_post_init
00:04:07.500 ************************************
00:04:07.500 19:34:48 -- env/env.sh@26 -- # uname
00:04:07.500 19:34:48 -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:07.500 19:34:48 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:07.500 19:34:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:07.500 19:34:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:07.500 19:34:48 -- common/autotest_common.sh@10 -- # set +x
00:04:07.500 ************************************
00:04:07.500 START TEST env_mem_callbacks
00:04:07.500 ************************************
00:04:07.500 19:34:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:07.500 EAL: Detected CPU lcores: 48
00:04:07.500 EAL: Detected NUMA nodes: 2
00:04:07.500 EAL: Detected shared linkage of DPDK
00:04:07.500 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:07.500 EAL: Selected IOVA mode 'VA'
00:04:07.500 EAL: No free 2048 kB hugepages reported on node 1
00:04:07.500 EAL: VFIO support initialized
00:04:07.500 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:07.500
00:04:07.500
00:04:07.500 CUnit - A unit testing framework for C - Version 2.1-3
00:04:07.500 http://cunit.sourceforge.net/
00:04:07.500
00:04:07.500
00:04:07.500 Suite: memory
00:04:07.500 Test: test ...
00:04:07.500 register 0x200000200000 2097152
00:04:07.500 malloc 3145728
00:04:07.500 register 0x200000400000 4194304
00:04:07.500 buf 0x200000500000 len 3145728 PASSED
00:04:07.500 malloc 64
00:04:07.500 buf 0x2000004fff40 len 64 PASSED
00:04:07.500 malloc 4194304
00:04:07.500 register 0x200000800000 6291456
00:04:07.500 buf 0x200000a00000 len 4194304 PASSED
00:04:07.500 free 0x200000500000 3145728
00:04:07.500 free 0x2000004fff40 64
00:04:07.500 unregister 0x200000400000 4194304 PASSED
00:04:07.500 free 0x200000a00000 4194304
00:04:07.500 unregister 0x200000800000 6291456 PASSED
00:04:07.500 malloc 8388608
00:04:07.500 register 0x200000400000 10485760
00:04:07.500 buf 0x200000600000 len 8388608 PASSED
00:04:07.500 free 0x200000600000 8388608
00:04:07.500 unregister 0x200000400000 10485760 PASSED
00:04:07.500 passed
00:04:07.500
00:04:07.500 Run Summary: Type Total Ran Passed Failed Inactive
00:04:07.500 suites 1 1 n/a 0 0
00:04:07.500 tests 1 1 1 0 0
00:04:07.500 asserts 15 15 15 0 n/a
00:04:07.500
00:04:07.500 Elapsed time = 0.005 seconds
00:04:07.500
00:04:07.500 real 0m0.043s
00:04:07.500 user 0m0.011s
00:04:07.500 sys 0m0.032s
00:04:07.500 19:34:48 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:07.500 19:34:48 -- common/autotest_common.sh@10 -- # set +x
00:04:07.500 ************************************
00:04:07.500 END TEST env_mem_callbacks
00:04:07.500 ************************************
00:04:07.500
00:04:07.500 real 0m6.777s
00:04:07.500 user 0m4.532s
00:04:07.500 sys 0m1.226s
00:04:07.500 19:34:48 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:07.500 19:34:48 -- common/autotest_common.sh@10 -- # set +x
00:04:07.500 ************************************
00:04:07.500 END TEST env
00:04:07.500 ************************************
00:04:07.500 19:34:48 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:07.500 19:34:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:07.500 19:34:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:07.500 19:34:48 -- common/autotest_common.sh@10 -- # set +x
00:04:07.759 ************************************
00:04:07.759 START TEST rpc
00:04:07.759 ************************************
00:04:07.759 19:34:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:07.759 * Looking for test storage...
00:04:07.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:07.759 19:34:49 -- rpc/rpc.sh@65 -- # spdk_pid=1578177
00:04:07.759 19:34:49 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:07.759 19:34:49 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:07.759 19:34:49 -- rpc/rpc.sh@67 -- # waitforlisten 1578177
00:04:07.759 19:34:49 -- common/autotest_common.sh@817 -- # '[' -z 1578177 ']'
00:04:07.759 19:34:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:07.759 19:34:49 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:07.759 19:34:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
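Each register/unregister pair in the mem_callbacks trace above is the env layer being told about a buffer so it can be mapped for DMA. A small sketch against SPDK's public env API; the allocation, size, and error handling are illustrative assumptions, not the test's exact steps:

    #include <stdlib.h>
    #include <spdk/env.h>

    #define REGION_SIZE (2 * 1024 * 1024)   /* 2 MiB, assumed to satisfy the env layer's alignment */

    int
    track_external_buffer(void)
    {
        /* aligned_alloc stands in for any externally allocated region */
        void *buf = aligned_alloc(REGION_SIZE, REGION_SIZE);
        if (buf == NULL)
            return -1;

        /* triggers the "register <vaddr> <len>" notification seen in the trace */
        if (spdk_mem_register(buf, REGION_SIZE) != 0) {
            free(buf);
            return -1;
        }

        /* ... DMA-capable use of buf ... */

        spdk_mem_unregister(buf, REGION_SIZE);   /* "unregister <vaddr> <len>" */
        free(buf);
        return 0;
    }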
00:04:07.759 19:34:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:07.759 19:34:49 -- common/autotest_common.sh@10 -- # set +x 00:04:07.759 [2024-04-24 19:34:49.155275] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:04:07.759 [2024-04-24 19:34:49.155352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578177 ] 00:04:07.759 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.759 [2024-04-24 19:34:49.212123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.018 [2024-04-24 19:34:49.327737] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:08.018 [2024-04-24 19:34:49.327788] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1578177' to capture a snapshot of events at runtime. 00:04:08.018 [2024-04-24 19:34:49.327812] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:08.018 [2024-04-24 19:34:49.327823] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:08.018 [2024-04-24 19:34:49.327833] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1578177 for offline analysis/debug. 00:04:08.018 [2024-04-24 19:34:49.327863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.276 19:34:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:08.276 19:34:49 -- common/autotest_common.sh@850 -- # return 0 00:04:08.276 19:34:49 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:08.276 19:34:49 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:08.276 19:34:49 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:08.276 19:34:49 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:08.276 19:34:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.276 19:34:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.276 19:34:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.276 ************************************ 00:04:08.276 START TEST rpc_integrity 00:04:08.276 ************************************ 00:04:08.276 19:34:49 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:08.276 19:34:49 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.276 19:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.276 19:34:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.276 19:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.276 19:34:49 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.276 19:34:49 -- rpc/rpc.sh@13 -- # jq length 00:04:08.276 19:34:49 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.276 19:34:49 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.276 19:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:04:08.276 19:34:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.276 19:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.276 19:34:49 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:08.276 19:34:49 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.276 19:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.276 19:34:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.276 19:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.276 19:34:49 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.276 { 00:04:08.276 "name": "Malloc0", 00:04:08.276 "aliases": [ 00:04:08.276 "5d8c9f22-6739-4fed-bef7-beb192b4ea45" 00:04:08.276 ], 00:04:08.276 "product_name": "Malloc disk", 00:04:08.276 "block_size": 512, 00:04:08.276 "num_blocks": 16384, 00:04:08.276 "uuid": "5d8c9f22-6739-4fed-bef7-beb192b4ea45", 00:04:08.276 "assigned_rate_limits": { 00:04:08.276 "rw_ios_per_sec": 0, 00:04:08.276 "rw_mbytes_per_sec": 0, 00:04:08.276 "r_mbytes_per_sec": 0, 00:04:08.276 "w_mbytes_per_sec": 0 00:04:08.276 }, 00:04:08.276 "claimed": false, 00:04:08.276 "zoned": false, 00:04:08.276 "supported_io_types": { 00:04:08.276 "read": true, 00:04:08.276 "write": true, 00:04:08.276 "unmap": true, 00:04:08.276 "write_zeroes": true, 00:04:08.276 "flush": true, 00:04:08.276 "reset": true, 00:04:08.276 "compare": false, 00:04:08.276 "compare_and_write": false, 00:04:08.276 "abort": true, 00:04:08.276 "nvme_admin": false, 00:04:08.276 "nvme_io": false 00:04:08.276 }, 00:04:08.276 "memory_domains": [ 00:04:08.276 { 00:04:08.277 "dma_device_id": "system", 00:04:08.277 "dma_device_type": 1 00:04:08.277 }, 00:04:08.277 { 00:04:08.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.277 "dma_device_type": 2 00:04:08.277 } 00:04:08.277 ], 00:04:08.277 "driver_specific": {} 00:04:08.277 } 00:04:08.277 ]' 00:04:08.277 19:34:49 -- rpc/rpc.sh@17 -- # jq length 00:04:08.551 19:34:49 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:08.551 19:34:49 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:08.551 19:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.551 19:34:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.551 [2024-04-24 19:34:49.795891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:08.551 [2024-04-24 19:34:49.795944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:08.551 [2024-04-24 19:34:49.795963] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc94e10 00:04:08.551 [2024-04-24 19:34:49.795975] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:08.551 [2024-04-24 19:34:49.797417] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:08.551 [2024-04-24 19:34:49.797446] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:08.551 Passthru0 00:04:08.551 19:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.551 19:34:49 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:08.551 19:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.551 19:34:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.551 19:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.551 19:34:49 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.551 { 00:04:08.551 "name": "Malloc0", 00:04:08.551 "aliases": [ 00:04:08.551 "5d8c9f22-6739-4fed-bef7-beb192b4ea45" 00:04:08.551 ], 00:04:08.551 "product_name": "Malloc disk", 00:04:08.551 "block_size": 512, 
00:04:08.551 "num_blocks": 16384, 00:04:08.551 "uuid": "5d8c9f22-6739-4fed-bef7-beb192b4ea45", 00:04:08.551 "assigned_rate_limits": { 00:04:08.551 "rw_ios_per_sec": 0, 00:04:08.551 "rw_mbytes_per_sec": 0, 00:04:08.551 "r_mbytes_per_sec": 0, 00:04:08.551 "w_mbytes_per_sec": 0 00:04:08.551 }, 00:04:08.551 "claimed": true, 00:04:08.551 "claim_type": "exclusive_write", 00:04:08.551 "zoned": false, 00:04:08.551 "supported_io_types": { 00:04:08.551 "read": true, 00:04:08.551 "write": true, 00:04:08.551 "unmap": true, 00:04:08.551 "write_zeroes": true, 00:04:08.551 "flush": true, 00:04:08.551 "reset": true, 00:04:08.551 "compare": false, 00:04:08.551 "compare_and_write": false, 00:04:08.551 "abort": true, 00:04:08.551 "nvme_admin": false, 00:04:08.551 "nvme_io": false 00:04:08.551 }, 00:04:08.551 "memory_domains": [ 00:04:08.551 { 00:04:08.551 "dma_device_id": "system", 00:04:08.551 "dma_device_type": 1 00:04:08.551 }, 00:04:08.551 { 00:04:08.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.551 "dma_device_type": 2 00:04:08.551 } 00:04:08.551 ], 00:04:08.551 "driver_specific": {} 00:04:08.551 }, 00:04:08.551 { 00:04:08.551 "name": "Passthru0", 00:04:08.551 "aliases": [ 00:04:08.551 "adb82358-7572-519f-a863-fb88dc743365" 00:04:08.551 ], 00:04:08.551 "product_name": "passthru", 00:04:08.551 "block_size": 512, 00:04:08.551 "num_blocks": 16384, 00:04:08.551 "uuid": "adb82358-7572-519f-a863-fb88dc743365", 00:04:08.551 "assigned_rate_limits": { 00:04:08.551 "rw_ios_per_sec": 0, 00:04:08.551 "rw_mbytes_per_sec": 0, 00:04:08.551 "r_mbytes_per_sec": 0, 00:04:08.551 "w_mbytes_per_sec": 0 00:04:08.551 }, 00:04:08.551 "claimed": false, 00:04:08.551 "zoned": false, 00:04:08.551 "supported_io_types": { 00:04:08.551 "read": true, 00:04:08.551 "write": true, 00:04:08.551 "unmap": true, 00:04:08.551 "write_zeroes": true, 00:04:08.551 "flush": true, 00:04:08.551 "reset": true, 00:04:08.551 "compare": false, 00:04:08.551 "compare_and_write": false, 00:04:08.551 "abort": true, 00:04:08.551 "nvme_admin": false, 00:04:08.551 "nvme_io": false 00:04:08.551 }, 00:04:08.551 "memory_domains": [ 00:04:08.551 { 00:04:08.551 "dma_device_id": "system", 00:04:08.551 "dma_device_type": 1 00:04:08.551 }, 00:04:08.551 { 00:04:08.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.551 "dma_device_type": 2 00:04:08.551 } 00:04:08.551 ], 00:04:08.551 "driver_specific": { 00:04:08.551 "passthru": { 00:04:08.551 "name": "Passthru0", 00:04:08.551 "base_bdev_name": "Malloc0" 00:04:08.551 } 00:04:08.551 } 00:04:08.551 } 00:04:08.551 ]' 00:04:08.551 19:34:49 -- rpc/rpc.sh@21 -- # jq length 00:04:08.551 19:34:49 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:08.551 19:34:49 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:08.551 19:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.551 19:34:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.551 19:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.551 19:34:49 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:08.551 19:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.551 19:34:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.551 19:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.551 19:34:49 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:08.551 19:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.551 19:34:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.551 19:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.551 19:34:49 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:08.551 19:34:49 -- rpc/rpc.sh@26 -- # jq length 00:04:08.551 19:34:49 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.552 00:04:08.552 real 0m0.232s 00:04:08.552 user 0m0.154s 00:04:08.552 sys 0m0.013s 00:04:08.552 19:34:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:08.552 19:34:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.552 ************************************ 00:04:08.552 END TEST rpc_integrity 00:04:08.552 ************************************ 00:04:08.552 19:34:49 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:08.552 19:34:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.552 19:34:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.552 19:34:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.552 ************************************ 00:04:08.552 START TEST rpc_plugins 00:04:08.552 ************************************ 00:04:08.552 19:34:50 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:08.552 19:34:50 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:08.552 19:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.552 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:08.552 19:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.552 19:34:50 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:08.552 19:34:50 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:08.552 19:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.552 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:08.811 19:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.811 19:34:50 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:08.811 { 00:04:08.811 "name": "Malloc1", 00:04:08.811 "aliases": [ 00:04:08.811 "65432b07-6792-4619-aaad-97680c72429c" 00:04:08.811 ], 00:04:08.811 "product_name": "Malloc disk", 00:04:08.811 "block_size": 4096, 00:04:08.811 "num_blocks": 256, 00:04:08.811 "uuid": "65432b07-6792-4619-aaad-97680c72429c", 00:04:08.811 "assigned_rate_limits": { 00:04:08.811 "rw_ios_per_sec": 0, 00:04:08.811 "rw_mbytes_per_sec": 0, 00:04:08.811 "r_mbytes_per_sec": 0, 00:04:08.811 "w_mbytes_per_sec": 0 00:04:08.811 }, 00:04:08.811 "claimed": false, 00:04:08.811 "zoned": false, 00:04:08.811 "supported_io_types": { 00:04:08.811 "read": true, 00:04:08.811 "write": true, 00:04:08.811 "unmap": true, 00:04:08.811 "write_zeroes": true, 00:04:08.811 "flush": true, 00:04:08.811 "reset": true, 00:04:08.811 "compare": false, 00:04:08.811 "compare_and_write": false, 00:04:08.811 "abort": true, 00:04:08.811 "nvme_admin": false, 00:04:08.811 "nvme_io": false 00:04:08.811 }, 00:04:08.811 "memory_domains": [ 00:04:08.811 { 00:04:08.811 "dma_device_id": "system", 00:04:08.811 "dma_device_type": 1 00:04:08.811 }, 00:04:08.811 { 00:04:08.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.811 "dma_device_type": 2 00:04:08.811 } 00:04:08.811 ], 00:04:08.811 "driver_specific": {} 00:04:08.811 } 00:04:08.811 ]' 00:04:08.811 19:34:50 -- rpc/rpc.sh@32 -- # jq length 00:04:08.811 19:34:50 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:08.811 19:34:50 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:08.811 19:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.811 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:08.811 19:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.811 19:34:50 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:08.811 19:34:50 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:04:08.811 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:08.811 19:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.811 19:34:50 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:08.811 19:34:50 -- rpc/rpc.sh@36 -- # jq length 00:04:08.811 19:34:50 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:08.811 00:04:08.811 real 0m0.114s 00:04:08.811 user 0m0.071s 00:04:08.811 sys 0m0.012s 00:04:08.811 19:34:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:08.811 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:08.811 ************************************ 00:04:08.811 END TEST rpc_plugins 00:04:08.811 ************************************ 00:04:08.811 19:34:50 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:08.811 19:34:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.811 19:34:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.811 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:08.811 ************************************ 00:04:08.811 START TEST rpc_trace_cmd_test 00:04:08.811 ************************************ 00:04:08.811 19:34:50 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:08.811 19:34:50 -- rpc/rpc.sh@40 -- # local info 00:04:08.812 19:34:50 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:08.812 19:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.812 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:08.812 19:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.812 19:34:50 -- rpc/rpc.sh@42 -- # info='{ 00:04:08.812 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1578177", 00:04:08.812 "tpoint_group_mask": "0x8", 00:04:08.812 "iscsi_conn": { 00:04:08.812 "mask": "0x2", 00:04:08.812 "tpoint_mask": "0x0" 00:04:08.812 }, 00:04:08.812 "scsi": { 00:04:08.812 "mask": "0x4", 00:04:08.812 "tpoint_mask": "0x0" 00:04:08.812 }, 00:04:08.812 "bdev": { 00:04:08.812 "mask": "0x8", 00:04:08.812 "tpoint_mask": "0xffffffffffffffff" 00:04:08.812 }, 00:04:08.812 "nvmf_rdma": { 00:04:08.812 "mask": "0x10", 00:04:08.812 "tpoint_mask": "0x0" 00:04:08.812 }, 00:04:08.812 "nvmf_tcp": { 00:04:08.812 "mask": "0x20", 00:04:08.812 "tpoint_mask": "0x0" 00:04:08.812 }, 00:04:08.812 "ftl": { 00:04:08.812 "mask": "0x40", 00:04:08.812 "tpoint_mask": "0x0" 00:04:08.812 }, 00:04:08.812 "blobfs": { 00:04:08.812 "mask": "0x80", 00:04:08.812 "tpoint_mask": "0x0" 00:04:08.812 }, 00:04:08.812 "dsa": { 00:04:08.812 "mask": "0x200", 00:04:08.812 "tpoint_mask": "0x0" 00:04:08.812 }, 00:04:08.812 "thread": { 00:04:08.812 "mask": "0x400", 00:04:08.812 "tpoint_mask": "0x0" 00:04:08.812 }, 00:04:08.812 "nvme_pcie": { 00:04:08.812 "mask": "0x800", 00:04:08.812 "tpoint_mask": "0x0" 00:04:08.812 }, 00:04:08.812 "iaa": { 00:04:08.812 "mask": "0x1000", 00:04:08.812 "tpoint_mask": "0x0" 00:04:08.812 }, 00:04:08.812 "nvme_tcp": { 00:04:08.812 "mask": "0x2000", 00:04:08.812 "tpoint_mask": "0x0" 00:04:08.812 }, 00:04:08.812 "bdev_nvme": { 00:04:08.812 "mask": "0x4000", 00:04:08.812 "tpoint_mask": "0x0" 00:04:08.812 }, 00:04:08.812 "sock": { 00:04:08.812 "mask": "0x8000", 00:04:08.812 "tpoint_mask": "0x0" 00:04:08.812 } 00:04:08.812 }' 00:04:08.812 19:34:50 -- rpc/rpc.sh@43 -- # jq length 00:04:08.812 19:34:50 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:08.812 19:34:50 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:09.070 19:34:50 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:09.070 19:34:50 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
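The masks in the trace_get_info dump above are plain bit positions: spdk_tgt was started with '-e bdev', and the dump shows the bdev group at mask 0x8 (bit 3) with its tpoint_mask fully set, while every other group stays at 0x0. The arithmetic, spelled out in C (the group number is read off the dump itself, not hard-coded anywhere authoritative):

    #include <inttypes.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint64_t bdev_group = 3;                 /* "bdev": { "mask": "0x8" } => bit 3 */
        uint64_t group_mask = UINT64_C(1) << bdev_group;
        uint64_t tpoint_mask = UINT64_MAX;       /* every tracepoint in the group enabled */

        printf("0x%" PRIx64 "\n", group_mask);   /* prints 0x8 */
        printf("0x%" PRIx64 "\n", tpoint_mask);  /* prints 0xffffffffffffffff */
        return 0;
    }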
00:04:09.070 19:34:50 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:09.070 19:34:50 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:09.070 19:34:50 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:09.070 19:34:50 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:09.070 19:34:50 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:09.070 00:04:09.070 real 0m0.194s 00:04:09.070 user 0m0.170s 00:04:09.070 sys 0m0.017s 00:04:09.070 19:34:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.070 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:09.070 ************************************ 00:04:09.070 END TEST rpc_trace_cmd_test 00:04:09.070 ************************************ 00:04:09.070 19:34:50 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:09.070 19:34:50 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:09.070 19:34:50 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:09.070 19:34:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.070 19:34:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.070 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:09.329 ************************************ 00:04:09.329 START TEST rpc_daemon_integrity 00:04:09.329 ************************************ 00:04:09.329 19:34:50 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:09.329 19:34:50 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.329 19:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.329 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:09.329 19:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.329 19:34:50 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:09.329 19:34:50 -- rpc/rpc.sh@13 -- # jq length 00:04:09.329 19:34:50 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.329 19:34:50 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.329 19:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.329 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:09.329 19:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.329 19:34:50 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:09.329 19:34:50 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.329 19:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.329 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:09.329 19:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.329 19:34:50 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.329 { 00:04:09.329 "name": "Malloc2", 00:04:09.329 "aliases": [ 00:04:09.329 "d8aeb2a8-022b-46a7-b6dc-a67d69370adc" 00:04:09.329 ], 00:04:09.329 "product_name": "Malloc disk", 00:04:09.329 "block_size": 512, 00:04:09.329 "num_blocks": 16384, 00:04:09.329 "uuid": "d8aeb2a8-022b-46a7-b6dc-a67d69370adc", 00:04:09.329 "assigned_rate_limits": { 00:04:09.329 "rw_ios_per_sec": 0, 00:04:09.329 "rw_mbytes_per_sec": 0, 00:04:09.329 "r_mbytes_per_sec": 0, 00:04:09.329 "w_mbytes_per_sec": 0 00:04:09.329 }, 00:04:09.329 "claimed": false, 00:04:09.329 "zoned": false, 00:04:09.329 "supported_io_types": { 00:04:09.329 "read": true, 00:04:09.329 "write": true, 00:04:09.329 "unmap": true, 00:04:09.329 "write_zeroes": true, 00:04:09.329 "flush": true, 00:04:09.329 "reset": true, 00:04:09.329 "compare": false, 00:04:09.329 "compare_and_write": false, 00:04:09.329 "abort": true, 00:04:09.329 "nvme_admin": false, 00:04:09.329 "nvme_io": false 00:04:09.329 }, 00:04:09.329 "memory_domains": [ 00:04:09.329 { 00:04:09.329 "dma_device_id": "system", 00:04:09.329 
"dma_device_type": 1 00:04:09.329 }, 00:04:09.329 { 00:04:09.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.329 "dma_device_type": 2 00:04:09.329 } 00:04:09.329 ], 00:04:09.329 "driver_specific": {} 00:04:09.329 } 00:04:09.329 ]' 00:04:09.329 19:34:50 -- rpc/rpc.sh@17 -- # jq length 00:04:09.329 19:34:50 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.329 19:34:50 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:09.329 19:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.329 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:09.329 [2024-04-24 19:34:50.694959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:09.329 [2024-04-24 19:34:50.695014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.329 [2024-04-24 19:34:50.695037] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc98760 00:04:09.329 [2024-04-24 19:34:50.695052] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.329 [2024-04-24 19:34:50.696395] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.329 [2024-04-24 19:34:50.696424] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.329 Passthru0 00:04:09.329 19:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.329 19:34:50 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.329 19:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.329 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:09.329 19:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.329 19:34:50 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.329 { 00:04:09.329 "name": "Malloc2", 00:04:09.329 "aliases": [ 00:04:09.329 "d8aeb2a8-022b-46a7-b6dc-a67d69370adc" 00:04:09.329 ], 00:04:09.329 "product_name": "Malloc disk", 00:04:09.329 "block_size": 512, 00:04:09.329 "num_blocks": 16384, 00:04:09.329 "uuid": "d8aeb2a8-022b-46a7-b6dc-a67d69370adc", 00:04:09.329 "assigned_rate_limits": { 00:04:09.329 "rw_ios_per_sec": 0, 00:04:09.329 "rw_mbytes_per_sec": 0, 00:04:09.329 "r_mbytes_per_sec": 0, 00:04:09.329 "w_mbytes_per_sec": 0 00:04:09.329 }, 00:04:09.329 "claimed": true, 00:04:09.329 "claim_type": "exclusive_write", 00:04:09.329 "zoned": false, 00:04:09.329 "supported_io_types": { 00:04:09.329 "read": true, 00:04:09.329 "write": true, 00:04:09.329 "unmap": true, 00:04:09.329 "write_zeroes": true, 00:04:09.329 "flush": true, 00:04:09.329 "reset": true, 00:04:09.329 "compare": false, 00:04:09.329 "compare_and_write": false, 00:04:09.329 "abort": true, 00:04:09.329 "nvme_admin": false, 00:04:09.329 "nvme_io": false 00:04:09.329 }, 00:04:09.329 "memory_domains": [ 00:04:09.329 { 00:04:09.329 "dma_device_id": "system", 00:04:09.329 "dma_device_type": 1 00:04:09.329 }, 00:04:09.329 { 00:04:09.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.329 "dma_device_type": 2 00:04:09.329 } 00:04:09.329 ], 00:04:09.329 "driver_specific": {} 00:04:09.329 }, 00:04:09.329 { 00:04:09.329 "name": "Passthru0", 00:04:09.329 "aliases": [ 00:04:09.329 "2d343c2c-966d-5c0a-96b9-14b1c868fb03" 00:04:09.329 ], 00:04:09.329 "product_name": "passthru", 00:04:09.329 "block_size": 512, 00:04:09.329 "num_blocks": 16384, 00:04:09.329 "uuid": "2d343c2c-966d-5c0a-96b9-14b1c868fb03", 00:04:09.329 "assigned_rate_limits": { 00:04:09.329 "rw_ios_per_sec": 0, 00:04:09.329 "rw_mbytes_per_sec": 0, 00:04:09.329 "r_mbytes_per_sec": 0, 00:04:09.329 
"w_mbytes_per_sec": 0 00:04:09.329 }, 00:04:09.329 "claimed": false, 00:04:09.329 "zoned": false, 00:04:09.329 "supported_io_types": { 00:04:09.329 "read": true, 00:04:09.329 "write": true, 00:04:09.329 "unmap": true, 00:04:09.329 "write_zeroes": true, 00:04:09.329 "flush": true, 00:04:09.329 "reset": true, 00:04:09.329 "compare": false, 00:04:09.329 "compare_and_write": false, 00:04:09.329 "abort": true, 00:04:09.329 "nvme_admin": false, 00:04:09.329 "nvme_io": false 00:04:09.329 }, 00:04:09.329 "memory_domains": [ 00:04:09.329 { 00:04:09.329 "dma_device_id": "system", 00:04:09.329 "dma_device_type": 1 00:04:09.329 }, 00:04:09.329 { 00:04:09.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.330 "dma_device_type": 2 00:04:09.330 } 00:04:09.330 ], 00:04:09.330 "driver_specific": { 00:04:09.330 "passthru": { 00:04:09.330 "name": "Passthru0", 00:04:09.330 "base_bdev_name": "Malloc2" 00:04:09.330 } 00:04:09.330 } 00:04:09.330 } 00:04:09.330 ]' 00:04:09.330 19:34:50 -- rpc/rpc.sh@21 -- # jq length 00:04:09.330 19:34:50 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.330 19:34:50 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.330 19:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.330 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:09.330 19:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.330 19:34:50 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:09.330 19:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.330 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:09.330 19:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.330 19:34:50 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.330 19:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.330 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:09.330 19:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.330 19:34:50 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.330 19:34:50 -- rpc/rpc.sh@26 -- # jq length 00:04:09.330 19:34:50 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.330 00:04:09.330 real 0m0.225s 00:04:09.330 user 0m0.151s 00:04:09.330 sys 0m0.021s 00:04:09.330 19:34:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.330 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:04:09.330 ************************************ 00:04:09.330 END TEST rpc_daemon_integrity 00:04:09.330 ************************************ 00:04:09.330 19:34:50 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:09.330 19:34:50 -- rpc/rpc.sh@84 -- # killprocess 1578177 00:04:09.330 19:34:50 -- common/autotest_common.sh@936 -- # '[' -z 1578177 ']' 00:04:09.330 19:34:50 -- common/autotest_common.sh@940 -- # kill -0 1578177 00:04:09.330 19:34:50 -- common/autotest_common.sh@941 -- # uname 00:04:09.589 19:34:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:09.589 19:34:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1578177 00:04:09.589 19:34:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:09.589 19:34:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:09.589 19:34:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1578177' 00:04:09.589 killing process with pid 1578177 00:04:09.589 19:34:50 -- common/autotest_common.sh@955 -- # kill 1578177 00:04:09.589 19:34:50 -- common/autotest_common.sh@960 -- # wait 1578177 00:04:09.848 00:04:09.848 real 0m2.277s 00:04:09.848 user 0m2.861s 
00:04:09.848 sys 0m0.736s 00:04:09.848 19:34:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.848 19:34:51 -- common/autotest_common.sh@10 -- # set +x 00:04:09.848 ************************************ 00:04:09.848 END TEST rpc 00:04:09.848 ************************************ 00:04:09.848 19:34:51 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:09.848 19:34:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.848 19:34:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.848 19:34:51 -- common/autotest_common.sh@10 -- # set +x 00:04:10.106 ************************************ 00:04:10.106 START TEST skip_rpc 00:04:10.106 ************************************ 00:04:10.106 19:34:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:10.106 * Looking for test storage... 00:04:10.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:10.106 19:34:51 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:10.106 19:34:51 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:10.106 19:34:51 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:10.106 19:34:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.106 19:34:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.106 19:34:51 -- common/autotest_common.sh@10 -- # set +x 00:04:10.106 ************************************ 00:04:10.106 START TEST skip_rpc 00:04:10.106 ************************************ 00:04:10.106 19:34:51 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:10.106 19:34:51 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1578667 00:04:10.106 19:34:51 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:10.106 19:34:51 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.106 19:34:51 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:10.365 [2024-04-24 19:34:51.651119] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:04:10.365 [2024-04-24 19:34:51.651191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578667 ] 00:04:10.365 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.365 [2024-04-24 19:34:51.712863] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.365 [2024-04-24 19:34:51.830869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.643 19:34:56 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:15.643 19:34:56 -- common/autotest_common.sh@638 -- # local es=0 00:04:15.643 19:34:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:15.643 19:34:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:15.643 19:34:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:15.643 19:34:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:15.643 19:34:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:15.643 19:34:56 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:15.643 19:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:15.643 19:34:56 -- common/autotest_common.sh@10 -- # set +x 00:04:15.643 19:34:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:15.643 19:34:56 -- common/autotest_common.sh@641 -- # es=1 00:04:15.643 19:34:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:15.643 19:34:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:15.643 19:34:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:15.643 19:34:56 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:15.643 19:34:56 -- rpc/skip_rpc.sh@23 -- # killprocess 1578667 00:04:15.643 19:34:56 -- common/autotest_common.sh@936 -- # '[' -z 1578667 ']' 00:04:15.643 19:34:56 -- common/autotest_common.sh@940 -- # kill -0 1578667 00:04:15.643 19:34:56 -- common/autotest_common.sh@941 -- # uname 00:04:15.643 19:34:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:15.643 19:34:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1578667 00:04:15.643 19:34:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:15.643 19:34:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:15.643 19:34:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1578667' 00:04:15.643 killing process with pid 1578667 00:04:15.643 19:34:56 -- common/autotest_common.sh@955 -- # kill 1578667 00:04:15.643 19:34:56 -- common/autotest_common.sh@960 -- # wait 1578667 00:04:15.643 00:04:15.643 real 0m5.495s 00:04:15.643 user 0m5.173s 00:04:15.643 sys 0m0.325s 00:04:15.643 19:34:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:15.643 19:34:57 -- common/autotest_common.sh@10 -- # set +x 00:04:15.643 ************************************ 00:04:15.643 END TEST skip_rpc 00:04:15.643 ************************************ 00:04:15.643 19:34:57 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:15.643 19:34:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.643 19:34:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.643 19:34:57 -- common/autotest_common.sh@10 -- # set +x 00:04:15.903 ************************************ 00:04:15.903 START TEST skip_rpc_with_json 00:04:15.903 ************************************ 
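The plain skip_rpc case that just finished asserts the inverse of a normal RPC test: spdk_tgt ran with --no-rpc-server, so the wrapped spdk_get_version call had to fail (es=1) for the test to pass. What the failing side of that looks like at the socket level, as a standalone POSIX sketch rather than the test's actual rpc_cmd plumbing:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Returns 0 when the RPC socket is reachable; with --no-rpc-server the
     * connect is expected to fail, which is exactly what the test wants. */
    static int
    rpc_socket_reachable(const char *path)
    {
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        int fd, rc;

        strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
        fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        rc = connect(fd, (struct sockaddr *)&sa, sizeof(sa));
        close(fd);
        return rc;   /* -1 (ENOENT/ECONNREFUSED) is the expected outcome here */
    }

    int
    main(void)
    {
        return rpc_socket_reachable("/var/tmp/spdk.sock") == 0;
    }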
00:04:15.903 19:34:57 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:15.903 19:34:57 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:15.903 19:34:57 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1579364 00:04:15.903 19:34:57 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:15.903 19:34:57 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.903 19:34:57 -- rpc/skip_rpc.sh@31 -- # waitforlisten 1579364 00:04:15.903 19:34:57 -- common/autotest_common.sh@817 -- # '[' -z 1579364 ']' 00:04:15.903 19:34:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.903 19:34:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:15.903 19:34:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.903 19:34:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:15.903 19:34:57 -- common/autotest_common.sh@10 -- # set +x 00:04:15.903 [2024-04-24 19:34:57.267238] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:04:15.903 [2024-04-24 19:34:57.267317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579364 ] 00:04:15.903 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.903 [2024-04-24 19:34:57.329144] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.162 [2024-04-24 19:34:57.439113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.422 19:34:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:16.422 19:34:57 -- common/autotest_common.sh@850 -- # return 0 00:04:16.422 19:34:57 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:16.422 19:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.422 19:34:57 -- common/autotest_common.sh@10 -- # set +x 00:04:16.422 [2024-04-24 19:34:57.704176] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:16.422 request: 00:04:16.422 { 00:04:16.422 "trtype": "tcp", 00:04:16.422 "method": "nvmf_get_transports", 00:04:16.422 "req_id": 1 00:04:16.422 } 00:04:16.422 Got JSON-RPC error response 00:04:16.422 response: 00:04:16.422 { 00:04:16.422 "code": -19, 00:04:16.422 "message": "No such device" 00:04:16.422 } 00:04:16.422 19:34:57 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:16.422 19:34:57 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:16.422 19:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.422 19:34:57 -- common/autotest_common.sh@10 -- # set +x 00:04:16.422 [2024-04-24 19:34:57.712286] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.422 19:34:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:16.423 19:34:57 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:16.423 19:34:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.423 19:34:57 -- common/autotest_common.sh@10 -- # set +x 00:04:16.423 19:34:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:16.423 19:34:57 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:16.423 { 
00:04:16.423 "subsystems": [ 00:04:16.423 { 00:04:16.423 "subsystem": "vfio_user_target", 00:04:16.423 "config": null 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "subsystem": "keyring", 00:04:16.423 "config": [] 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "subsystem": "iobuf", 00:04:16.423 "config": [ 00:04:16.423 { 00:04:16.423 "method": "iobuf_set_options", 00:04:16.423 "params": { 00:04:16.423 "small_pool_count": 8192, 00:04:16.423 "large_pool_count": 1024, 00:04:16.423 "small_bufsize": 8192, 00:04:16.423 "large_bufsize": 135168 00:04:16.423 } 00:04:16.423 } 00:04:16.423 ] 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "subsystem": "sock", 00:04:16.423 "config": [ 00:04:16.423 { 00:04:16.423 "method": "sock_impl_set_options", 00:04:16.423 "params": { 00:04:16.423 "impl_name": "posix", 00:04:16.423 "recv_buf_size": 2097152, 00:04:16.423 "send_buf_size": 2097152, 00:04:16.423 "enable_recv_pipe": true, 00:04:16.423 "enable_quickack": false, 00:04:16.423 "enable_placement_id": 0, 00:04:16.423 "enable_zerocopy_send_server": true, 00:04:16.423 "enable_zerocopy_send_client": false, 00:04:16.423 "zerocopy_threshold": 0, 00:04:16.423 "tls_version": 0, 00:04:16.423 "enable_ktls": false 00:04:16.423 } 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "method": "sock_impl_set_options", 00:04:16.423 "params": { 00:04:16.423 "impl_name": "ssl", 00:04:16.423 "recv_buf_size": 4096, 00:04:16.423 "send_buf_size": 4096, 00:04:16.423 "enable_recv_pipe": true, 00:04:16.423 "enable_quickack": false, 00:04:16.423 "enable_placement_id": 0, 00:04:16.423 "enable_zerocopy_send_server": true, 00:04:16.423 "enable_zerocopy_send_client": false, 00:04:16.423 "zerocopy_threshold": 0, 00:04:16.423 "tls_version": 0, 00:04:16.423 "enable_ktls": false 00:04:16.423 } 00:04:16.423 } 00:04:16.423 ] 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "subsystem": "vmd", 00:04:16.423 "config": [] 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "subsystem": "accel", 00:04:16.423 "config": [ 00:04:16.423 { 00:04:16.423 "method": "accel_set_options", 00:04:16.423 "params": { 00:04:16.423 "small_cache_size": 128, 00:04:16.423 "large_cache_size": 16, 00:04:16.423 "task_count": 2048, 00:04:16.423 "sequence_count": 2048, 00:04:16.423 "buf_count": 2048 00:04:16.423 } 00:04:16.423 } 00:04:16.423 ] 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "subsystem": "bdev", 00:04:16.423 "config": [ 00:04:16.423 { 00:04:16.423 "method": "bdev_set_options", 00:04:16.423 "params": { 00:04:16.423 "bdev_io_pool_size": 65535, 00:04:16.423 "bdev_io_cache_size": 256, 00:04:16.423 "bdev_auto_examine": true, 00:04:16.423 "iobuf_small_cache_size": 128, 00:04:16.423 "iobuf_large_cache_size": 16 00:04:16.423 } 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "method": "bdev_raid_set_options", 00:04:16.423 "params": { 00:04:16.423 "process_window_size_kb": 1024 00:04:16.423 } 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "method": "bdev_iscsi_set_options", 00:04:16.423 "params": { 00:04:16.423 "timeout_sec": 30 00:04:16.423 } 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "method": "bdev_nvme_set_options", 00:04:16.423 "params": { 00:04:16.423 "action_on_timeout": "none", 00:04:16.423 "timeout_us": 0, 00:04:16.423 "timeout_admin_us": 0, 00:04:16.423 "keep_alive_timeout_ms": 10000, 00:04:16.423 "arbitration_burst": 0, 00:04:16.423 "low_priority_weight": 0, 00:04:16.423 "medium_priority_weight": 0, 00:04:16.423 "high_priority_weight": 0, 00:04:16.423 "nvme_adminq_poll_period_us": 10000, 00:04:16.423 "nvme_ioq_poll_period_us": 0, 00:04:16.423 "io_queue_requests": 0, 00:04:16.423 
"delay_cmd_submit": true, 00:04:16.423 "transport_retry_count": 4, 00:04:16.423 "bdev_retry_count": 3, 00:04:16.423 "transport_ack_timeout": 0, 00:04:16.423 "ctrlr_loss_timeout_sec": 0, 00:04:16.423 "reconnect_delay_sec": 0, 00:04:16.423 "fast_io_fail_timeout_sec": 0, 00:04:16.423 "disable_auto_failback": false, 00:04:16.423 "generate_uuids": false, 00:04:16.423 "transport_tos": 0, 00:04:16.423 "nvme_error_stat": false, 00:04:16.423 "rdma_srq_size": 0, 00:04:16.423 "io_path_stat": false, 00:04:16.423 "allow_accel_sequence": false, 00:04:16.423 "rdma_max_cq_size": 0, 00:04:16.423 "rdma_cm_event_timeout_ms": 0, 00:04:16.423 "dhchap_digests": [ 00:04:16.423 "sha256", 00:04:16.423 "sha384", 00:04:16.423 "sha512" 00:04:16.423 ], 00:04:16.423 "dhchap_dhgroups": [ 00:04:16.423 "null", 00:04:16.423 "ffdhe2048", 00:04:16.423 "ffdhe3072", 00:04:16.423 "ffdhe4096", 00:04:16.423 "ffdhe6144", 00:04:16.423 "ffdhe8192" 00:04:16.423 ] 00:04:16.423 } 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "method": "bdev_nvme_set_hotplug", 00:04:16.423 "params": { 00:04:16.423 "period_us": 100000, 00:04:16.423 "enable": false 00:04:16.423 } 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "method": "bdev_wait_for_examine" 00:04:16.423 } 00:04:16.423 ] 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "subsystem": "scsi", 00:04:16.423 "config": null 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "subsystem": "scheduler", 00:04:16.423 "config": [ 00:04:16.423 { 00:04:16.423 "method": "framework_set_scheduler", 00:04:16.423 "params": { 00:04:16.423 "name": "static" 00:04:16.423 } 00:04:16.423 } 00:04:16.423 ] 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "subsystem": "vhost_scsi", 00:04:16.423 "config": [] 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "subsystem": "vhost_blk", 00:04:16.423 "config": [] 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "subsystem": "ublk", 00:04:16.423 "config": [] 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "subsystem": "nbd", 00:04:16.423 "config": [] 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "subsystem": "nvmf", 00:04:16.423 "config": [ 00:04:16.423 { 00:04:16.423 "method": "nvmf_set_config", 00:04:16.423 "params": { 00:04:16.423 "discovery_filter": "match_any", 00:04:16.423 "admin_cmd_passthru": { 00:04:16.423 "identify_ctrlr": false 00:04:16.423 } 00:04:16.423 } 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "method": "nvmf_set_max_subsystems", 00:04:16.423 "params": { 00:04:16.423 "max_subsystems": 1024 00:04:16.423 } 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "method": "nvmf_set_crdt", 00:04:16.423 "params": { 00:04:16.423 "crdt1": 0, 00:04:16.423 "crdt2": 0, 00:04:16.423 "crdt3": 0 00:04:16.423 } 00:04:16.423 }, 00:04:16.423 { 00:04:16.423 "method": "nvmf_create_transport", 00:04:16.423 "params": { 00:04:16.423 "trtype": "TCP", 00:04:16.423 "max_queue_depth": 128, 00:04:16.423 "max_io_qpairs_per_ctrlr": 127, 00:04:16.423 "in_capsule_data_size": 4096, 00:04:16.423 "max_io_size": 131072, 00:04:16.423 "io_unit_size": 131072, 00:04:16.423 "max_aq_depth": 128, 00:04:16.423 "num_shared_buffers": 511, 00:04:16.423 "buf_cache_size": 4294967295, 00:04:16.424 "dif_insert_or_strip": false, 00:04:16.424 "zcopy": false, 00:04:16.424 "c2h_success": true, 00:04:16.424 "sock_priority": 0, 00:04:16.424 "abort_timeout_sec": 1, 00:04:16.424 "ack_timeout": 0, 00:04:16.424 "data_wr_pool_size": 0 00:04:16.424 } 00:04:16.424 } 00:04:16.424 ] 00:04:16.424 }, 00:04:16.424 { 00:04:16.424 "subsystem": "iscsi", 00:04:16.424 "config": [ 00:04:16.424 { 00:04:16.424 "method": "iscsi_set_options", 00:04:16.424 "params": { 00:04:16.424 
"node_base": "iqn.2016-06.io.spdk", 00:04:16.424 "max_sessions": 128, 00:04:16.424 "max_connections_per_session": 2, 00:04:16.424 "max_queue_depth": 64, 00:04:16.424 "default_time2wait": 2, 00:04:16.424 "default_time2retain": 20, 00:04:16.424 "first_burst_length": 8192, 00:04:16.424 "immediate_data": true, 00:04:16.424 "allow_duplicated_isid": false, 00:04:16.424 "error_recovery_level": 0, 00:04:16.424 "nop_timeout": 60, 00:04:16.424 "nop_in_interval": 30, 00:04:16.424 "disable_chap": false, 00:04:16.424 "require_chap": false, 00:04:16.424 "mutual_chap": false, 00:04:16.424 "chap_group": 0, 00:04:16.424 "max_large_datain_per_connection": 64, 00:04:16.424 "max_r2t_per_connection": 4, 00:04:16.424 "pdu_pool_size": 36864, 00:04:16.424 "immediate_data_pool_size": 16384, 00:04:16.424 "data_out_pool_size": 2048 00:04:16.424 } 00:04:16.424 } 00:04:16.424 ] 00:04:16.424 } 00:04:16.424 ] 00:04:16.424 } 00:04:16.424 19:34:57 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:16.424 19:34:57 -- rpc/skip_rpc.sh@40 -- # killprocess 1579364 00:04:16.424 19:34:57 -- common/autotest_common.sh@936 -- # '[' -z 1579364 ']' 00:04:16.424 19:34:57 -- common/autotest_common.sh@940 -- # kill -0 1579364 00:04:16.424 19:34:57 -- common/autotest_common.sh@941 -- # uname 00:04:16.424 19:34:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:16.424 19:34:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1579364 00:04:16.424 19:34:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:16.424 19:34:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:16.424 19:34:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1579364' 00:04:16.424 killing process with pid 1579364 00:04:16.424 19:34:57 -- common/autotest_common.sh@955 -- # kill 1579364 00:04:16.424 19:34:57 -- common/autotest_common.sh@960 -- # wait 1579364 00:04:16.995 19:34:58 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1579502 00:04:16.995 19:34:58 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:16.995 19:34:58 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:22.276 19:35:03 -- rpc/skip_rpc.sh@50 -- # killprocess 1579502 00:04:22.276 19:35:03 -- common/autotest_common.sh@936 -- # '[' -z 1579502 ']' 00:04:22.276 19:35:03 -- common/autotest_common.sh@940 -- # kill -0 1579502 00:04:22.276 19:35:03 -- common/autotest_common.sh@941 -- # uname 00:04:22.276 19:35:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:22.276 19:35:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1579502 00:04:22.276 19:35:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:22.276 19:35:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:22.277 19:35:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1579502' 00:04:22.277 killing process with pid 1579502 00:04:22.277 19:35:03 -- common/autotest_common.sh@955 -- # kill 1579502 00:04:22.277 19:35:03 -- common/autotest_common.sh@960 -- # wait 1579502 00:04:22.536 19:35:03 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:22.536 19:35:03 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:22.537 00:04:22.537 real 0m6.619s 00:04:22.537 user 0m6.197s 00:04:22.537 sys 0m0.708s 00:04:22.537 
19:35:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.537 19:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:22.537 ************************************ 00:04:22.537 END TEST skip_rpc_with_json 00:04:22.537 ************************************ 00:04:22.537 19:35:03 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:22.537 19:35:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.537 19:35:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.537 19:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:22.537 ************************************ 00:04:22.537 START TEST skip_rpc_with_delay 00:04:22.537 ************************************ 00:04:22.537 19:35:03 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:04:22.537 19:35:03 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.537 19:35:03 -- common/autotest_common.sh@638 -- # local es=0 00:04:22.537 19:35:03 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.537 19:35:03 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.537 19:35:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:22.537 19:35:03 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.537 19:35:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:22.537 19:35:03 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.537 19:35:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:22.537 19:35:03 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.537 19:35:03 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:22.537 19:35:03 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.537 [2024-04-24 19:35:04.013052] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:22.537 [2024-04-24 19:35:04.013180] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:22.537 19:35:04 -- common/autotest_common.sh@641 -- # es=1 00:04:22.537 19:35:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:22.537 19:35:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:22.537 19:35:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:22.537 00:04:22.537 real 0m0.066s 00:04:22.537 user 0m0.042s 00:04:22.537 sys 0m0.023s 00:04:22.537 19:35:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.537 19:35:04 -- common/autotest_common.sh@10 -- # set +x 00:04:22.537 ************************************ 00:04:22.537 END TEST skip_rpc_with_delay 00:04:22.537 ************************************ 00:04:22.537 19:35:04 -- rpc/skip_rpc.sh@77 -- # uname 00:04:22.796 19:35:04 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:22.796 19:35:04 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:22.796 19:35:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.796 19:35:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.796 19:35:04 -- common/autotest_common.sh@10 -- # set +x 00:04:22.796 ************************************ 00:04:22.796 START TEST exit_on_failed_rpc_init 00:04:22.796 ************************************ 00:04:22.796 19:35:04 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:04:22.796 19:35:04 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1580239 00:04:22.796 19:35:04 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:22.796 19:35:04 -- rpc/skip_rpc.sh@63 -- # waitforlisten 1580239 00:04:22.796 19:35:04 -- common/autotest_common.sh@817 -- # '[' -z 1580239 ']' 00:04:22.796 19:35:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.796 19:35:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:22.796 19:35:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.796 19:35:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:22.796 19:35:04 -- common/autotest_common.sh@10 -- # set +x 00:04:22.796 [2024-04-24 19:35:04.195387] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:04:22.796 [2024-04-24 19:35:04.195464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580239 ] 00:04:22.796 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.796 [2024-04-24 19:35:04.258225] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.054 [2024-04-24 19:35:04.373548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.624 19:35:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:23.624 19:35:05 -- common/autotest_common.sh@850 -- # return 0 00:04:23.624 19:35:05 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.624 19:35:05 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.624 19:35:05 -- common/autotest_common.sh@638 -- # local es=0 00:04:23.624 19:35:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.625 19:35:05 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.625 19:35:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:23.625 19:35:05 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.625 19:35:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:23.625 19:35:05 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.625 19:35:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:23.625 19:35:05 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.625 19:35:05 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:23.625 19:35:05 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.885 [2024-04-24 19:35:05.167316] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:04:23.885 [2024-04-24 19:35:05.167402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580368 ] 00:04:23.885 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.885 [2024-04-24 19:35:05.229338] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.885 [2024-04-24 19:35:05.347546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.885 [2024-04-24 19:35:05.347705] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
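What the exit_on_failed_rpc_init entries around this point are driving at: the first spdk_tgt owns the default RPC socket, so a second instance pointed at the same /var/tmp/spdk.sock must fail initialization, and the test asserts on that non-zero exit. A stripped-down sketch of the collision, assuming the same binary path; the real test's NOT() wrapper and waitforlisten are replaced here by a plain if and a sleep:

    # Hedged sketch: two targets, one RPC socket.
    build/bin/spdk_tgt -m 0x1 &           # first instance claims /var/tmp/spdk.sock
    first=$!
    sleep 1                               # crude stand-in for waitforlisten
    if ! build/bin/spdk_tgt -m 0x2; then  # second instance hits "socket in use"
        echo 'second target failed init, as the test expects'
    fi
    kill "$first"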
00:04:23.885 [2024-04-24 19:35:05.347726] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:23.885 [2024-04-24 19:35:05.347738] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:24.144 19:35:05 -- common/autotest_common.sh@641 -- # es=234 00:04:24.144 19:35:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:24.144 19:35:05 -- common/autotest_common.sh@650 -- # es=106 00:04:24.144 19:35:05 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:24.144 19:35:05 -- common/autotest_common.sh@658 -- # es=1 00:04:24.144 19:35:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:24.144 19:35:05 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:24.144 19:35:05 -- rpc/skip_rpc.sh@70 -- # killprocess 1580239 00:04:24.144 19:35:05 -- common/autotest_common.sh@936 -- # '[' -z 1580239 ']' 00:04:24.144 19:35:05 -- common/autotest_common.sh@940 -- # kill -0 1580239 00:04:24.144 19:35:05 -- common/autotest_common.sh@941 -- # uname 00:04:24.144 19:35:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:24.144 19:35:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1580239 00:04:24.144 19:35:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:24.144 19:35:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:24.144 19:35:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1580239' 00:04:24.144 killing process with pid 1580239 00:04:24.144 19:35:05 -- common/autotest_common.sh@955 -- # kill 1580239 00:04:24.144 19:35:05 -- common/autotest_common.sh@960 -- # wait 1580239 00:04:24.712 00:04:24.712 real 0m1.815s 00:04:24.712 user 0m2.170s 00:04:24.712 sys 0m0.480s 00:04:24.712 19:35:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.712 19:35:05 -- common/autotest_common.sh@10 -- # set +x 00:04:24.712 ************************************ 00:04:24.712 END TEST exit_on_failed_rpc_init 00:04:24.712 ************************************ 00:04:24.712 19:35:05 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:24.712 00:04:24.712 real 0m14.525s 00:04:24.712 user 0m13.798s 00:04:24.712 sys 0m1.824s 00:04:24.712 19:35:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.712 19:35:05 -- common/autotest_common.sh@10 -- # set +x 00:04:24.712 ************************************ 00:04:24.712 END TEST skip_rpc 00:04:24.712 ************************************ 00:04:24.712 19:35:06 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:24.712 19:35:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.712 19:35:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.712 19:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:24.712 ************************************ 00:04:24.712 START TEST rpc_client 00:04:24.712 ************************************ 00:04:24.712 19:35:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:24.712 * Looking for test storage... 
00:04:24.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:24.712 19:35:06 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:24.712 OK 00:04:24.712 19:35:06 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:24.712 00:04:24.712 real 0m0.065s 00:04:24.712 user 0m0.031s 00:04:24.712 sys 0m0.040s 00:04:24.712 19:35:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.712 19:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:24.712 ************************************ 00:04:24.712 END TEST rpc_client 00:04:24.712 ************************************ 00:04:24.712 19:35:06 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:24.712 19:35:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.712 19:35:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.712 19:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:24.971 ************************************ 00:04:24.971 START TEST json_config 00:04:24.971 ************************************ 00:04:24.971 19:35:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:24.971 19:35:06 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:24.971 19:35:06 -- nvmf/common.sh@7 -- # uname -s 00:04:24.971 19:35:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.971 19:35:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.971 19:35:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.971 19:35:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.971 19:35:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.971 19:35:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.971 19:35:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.971 19:35:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.971 19:35:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.971 19:35:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.971 19:35:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:24.971 19:35:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:24.971 19:35:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.971 19:35:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.971 19:35:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.971 19:35:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.972 19:35:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:24.972 19:35:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.972 19:35:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.972 19:35:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.972 19:35:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.972 19:35:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.972 19:35:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.972 19:35:06 -- paths/export.sh@5 -- # export PATH 00:04:24.972 19:35:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.972 19:35:06 -- nvmf/common.sh@47 -- # : 0 00:04:24.972 19:35:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:24.972 19:35:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:24.972 19:35:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.972 19:35:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.972 19:35:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.972 19:35:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:24.972 19:35:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:24.972 19:35:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:24.972 19:35:06 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:24.972 19:35:06 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:24.972 19:35:06 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:24.972 19:35:06 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:24.972 19:35:06 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:24.972 19:35:06 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:24.972 19:35:06 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:24.972 19:35:06 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:24.972 19:35:06 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:24.972 19:35:06 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:24.972 19:35:06 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:04:24.972 19:35:06 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:24.972 19:35:06 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:24.972 19:35:06 -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:24.972 19:35:06 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.972 19:35:06 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:24.972 INFO: JSON configuration test init 00:04:24.972 19:35:06 -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:24.972 19:35:06 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:24.972 19:35:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:24.972 19:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:24.972 19:35:06 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:24.972 19:35:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:24.972 19:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:24.972 19:35:06 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:24.972 19:35:06 -- json_config/common.sh@9 -- # local app=target 00:04:24.972 19:35:06 -- json_config/common.sh@10 -- # shift 00:04:24.972 19:35:06 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.972 19:35:06 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.972 19:35:06 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.972 19:35:06 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.972 19:35:06 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.972 19:35:06 -- json_config/common.sh@22 -- # app_pid["$app"]=1580632 00:04:24.972 19:35:06 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:24.972 19:35:06 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.972 Waiting for target to run... 00:04:24.972 19:35:06 -- json_config/common.sh@25 -- # waitforlisten 1580632 /var/tmp/spdk_tgt.sock 00:04:24.972 19:35:06 -- common/autotest_common.sh@817 -- # '[' -z 1580632 ']' 00:04:24.972 19:35:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.972 19:35:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:24.972 19:35:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.972 19:35:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:24.972 19:35:06 -- common/autotest_common.sh@10 -- # set +x 00:04:24.972 [2024-04-24 19:35:06.404198] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:04:24.972 [2024-04-24 19:35:06.404289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1580632 ] 00:04:24.972 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.540 [2024-04-24 19:35:06.899663] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.540 [2024-04-24 19:35:07.006791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.105 19:35:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:26.105 19:35:07 -- common/autotest_common.sh@850 -- # return 0 00:04:26.105 19:35:07 -- json_config/common.sh@26 -- # echo '' 00:04:26.105 00:04:26.105 19:35:07 -- json_config/json_config.sh@269 -- # create_accel_config 00:04:26.105 19:35:07 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:26.105 19:35:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:26.105 19:35:07 -- common/autotest_common.sh@10 -- # set +x 00:04:26.105 19:35:07 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:26.105 19:35:07 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:26.105 19:35:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:26.105 19:35:07 -- common/autotest_common.sh@10 -- # set +x 00:04:26.105 19:35:07 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:26.105 19:35:07 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:26.105 19:35:07 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:29.394 19:35:10 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:29.394 19:35:10 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:29.394 19:35:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:29.394 19:35:10 -- common/autotest_common.sh@10 -- # set +x 00:04:29.394 19:35:10 -- json_config/json_config.sh@45 -- # local ret=0 00:04:29.394 19:35:10 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:29.394 19:35:10 -- json_config/json_config.sh@46 -- # local enabled_types 00:04:29.394 19:35:10 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:29.394 19:35:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:29.394 19:35:10 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:29.394 19:35:10 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:29.394 19:35:10 -- json_config/json_config.sh@48 -- # local get_types 00:04:29.394 19:35:10 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:29.394 19:35:10 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:29.394 19:35:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:29.394 19:35:10 -- common/autotest_common.sh@10 -- # set +x 00:04:29.395 19:35:10 -- json_config/json_config.sh@55 -- # return 0 00:04:29.395 19:35:10 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:29.395 19:35:10 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:29.395 19:35:10 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:29.395 19:35:10 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:29.395 19:35:10 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:29.395 19:35:10 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:29.395 19:35:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:29.395 19:35:10 -- common/autotest_common.sh@10 -- # set +x 00:04:29.395 19:35:10 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:29.395 19:35:10 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:29.395 19:35:10 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:29.395 19:35:10 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.395 19:35:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.653 MallocForNvmf0 00:04:29.653 19:35:11 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.653 19:35:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.912 MallocForNvmf1 00:04:29.912 19:35:11 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:29.912 19:35:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.170 [2024-04-24 19:35:11.549211] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.171 19:35:11 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.171 19:35:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.428 19:35:11 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.428 19:35:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.686 19:35:12 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.686 19:35:12 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.945 19:35:12 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:30.945 19:35:12 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.202 [2024-04-24 19:35:12.528373] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.202 19:35:12 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:31.202 19:35:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:31.202 
19:35:12 -- common/autotest_common.sh@10 -- # set +x 00:04:31.202 19:35:12 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:31.202 19:35:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:31.202 19:35:12 -- common/autotest_common.sh@10 -- # set +x 00:04:31.202 19:35:12 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:31.202 19:35:12 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.202 19:35:12 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.461 MallocBdevForConfigChangeCheck 00:04:31.461 19:35:12 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:31.461 19:35:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:31.461 19:35:12 -- common/autotest_common.sh@10 -- # set +x 00:04:31.461 19:35:12 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:31.461 19:35:12 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.719 19:35:13 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:31.719 INFO: shutting down applications... 00:04:31.719 19:35:13 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:31.719 19:35:13 -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:31.719 19:35:13 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:31.719 19:35:13 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:33.631 Calling clear_iscsi_subsystem 00:04:33.631 Calling clear_nvmf_subsystem 00:04:33.631 Calling clear_nbd_subsystem 00:04:33.631 Calling clear_ublk_subsystem 00:04:33.631 Calling clear_vhost_blk_subsystem 00:04:33.631 Calling clear_vhost_scsi_subsystem 00:04:33.631 Calling clear_bdev_subsystem 00:04:33.631 19:35:14 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:33.631 19:35:14 -- json_config/json_config.sh@343 -- # count=100 00:04:33.631 19:35:14 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:33.631 19:35:14 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.631 19:35:14 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:33.631 19:35:14 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:33.892 19:35:15 -- json_config/json_config.sh@345 -- # break 00:04:33.892 19:35:15 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:33.892 19:35:15 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:33.892 19:35:15 -- json_config/common.sh@31 -- # local app=target 00:04:33.892 19:35:15 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.892 19:35:15 -- json_config/common.sh@35 -- # [[ -n 1580632 ]] 00:04:33.892 19:35:15 -- json_config/common.sh@38 -- # kill -SIGINT 1580632 00:04:33.892 19:35:15 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.892 19:35:15 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.892 19:35:15 -- json_config/common.sh@41 -- # kill -0 1580632 00:04:33.892 19:35:15 -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.461 19:35:15 -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.461 19:35:15 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.461 19:35:15 -- json_config/common.sh@41 -- # kill -0 1580632 00:04:34.461 19:35:15 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.461 19:35:15 -- json_config/common.sh@43 -- # break 00:04:34.461 19:35:15 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.461 19:35:15 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.461 SPDK target shutdown done 00:04:34.462 19:35:15 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:34.462 INFO: relaunching applications... 00:04:34.462 19:35:15 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.462 19:35:15 -- json_config/common.sh@9 -- # local app=target 00:04:34.462 19:35:15 -- json_config/common.sh@10 -- # shift 00:04:34.462 19:35:15 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:34.462 19:35:15 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:34.462 19:35:15 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:34.462 19:35:15 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.462 19:35:15 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.462 19:35:15 -- json_config/common.sh@22 -- # app_pid["$app"]=1581900 00:04:34.462 19:35:15 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.462 19:35:15 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:34.462 Waiting for target to run... 00:04:34.462 19:35:15 -- json_config/common.sh@25 -- # waitforlisten 1581900 /var/tmp/spdk_tgt.sock 00:04:34.462 19:35:15 -- common/autotest_common.sh@817 -- # '[' -z 1581900 ']' 00:04:34.462 19:35:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:34.462 19:35:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:34.462 19:35:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:34.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:34.462 19:35:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:34.462 19:35:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.462 [2024-04-24 19:35:15.818474] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:04:34.462 [2024-04-24 19:35:15.818574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581900 ] 00:04:34.462 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.027 [2024-04-24 19:35:16.345045] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.027 [2024-04-24 19:35:16.452303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.315 [2024-04-24 19:35:19.482579] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:38.315 [2024-04-24 19:35:19.515026] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:38.902 19:35:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:38.902 19:35:20 -- common/autotest_common.sh@850 -- # return 0 00:04:38.902 19:35:20 -- json_config/common.sh@26 -- # echo '' 00:04:38.902 00:04:38.902 19:35:20 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:38.902 19:35:20 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:38.902 INFO: Checking if target configuration is the same... 00:04:38.902 19:35:20 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.902 19:35:20 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:38.902 19:35:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.902 + '[' 2 -ne 2 ']' 00:04:38.902 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:38.902 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:38.902 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:38.902 +++ basename /dev/fd/62 00:04:38.902 ++ mktemp /tmp/62.XXX 00:04:38.902 + tmp_file_1=/tmp/62.een 00:04:38.902 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.902 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:38.902 + tmp_file_2=/tmp/spdk_tgt_config.json.uID 00:04:38.902 + ret=0 00:04:38.902 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.159 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.159 + diff -u /tmp/62.een /tmp/spdk_tgt_config.json.uID 00:04:39.159 + echo 'INFO: JSON config files are the same' 00:04:39.159 INFO: JSON config files are the same 00:04:39.159 + rm /tmp/62.een /tmp/spdk_tgt_config.json.uID 00:04:39.159 + exit 0 00:04:39.159 19:35:20 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:39.159 19:35:20 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:39.159 INFO: changing configuration and checking if this can be detected... 
00:04:39.159 19:35:20 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:39.159 19:35:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:39.417 19:35:20 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.417 19:35:20 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:39.417 19:35:20 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.417 + '[' 2 -ne 2 ']' 00:04:39.417 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:39.417 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:39.417 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:39.417 +++ basename /dev/fd/62 00:04:39.417 ++ mktemp /tmp/62.XXX 00:04:39.417 + tmp_file_1=/tmp/62.219 00:04:39.417 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.417 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:39.417 + tmp_file_2=/tmp/spdk_tgt_config.json.iQW 00:04:39.417 + ret=0 00:04:39.417 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.986 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.986 + diff -u /tmp/62.219 /tmp/spdk_tgt_config.json.iQW 00:04:39.986 + ret=1 00:04:39.986 + echo '=== Start of file: /tmp/62.219 ===' 00:04:39.986 + cat /tmp/62.219 00:04:39.986 + echo '=== End of file: /tmp/62.219 ===' 00:04:39.986 + echo '' 00:04:39.986 + echo '=== Start of file: /tmp/spdk_tgt_config.json.iQW ===' 00:04:39.986 + cat /tmp/spdk_tgt_config.json.iQW 00:04:39.986 + echo '=== End of file: /tmp/spdk_tgt_config.json.iQW ===' 00:04:39.986 + echo '' 00:04:39.986 + rm /tmp/62.219 /tmp/spdk_tgt_config.json.iQW 00:04:39.986 + exit 1 00:04:39.986 19:35:21 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:39.986 INFO: configuration change detected. 
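The same/changed split between the two json_diff.sh runs above reduces to a save_config round trip: snapshot the live configuration, mutate it over RPC, snapshot again, and diff the sorted forms. A compact sketch using the helpers the trace names; the socket path is verbatim, while the temp file handling and redirections are simplified for illustration (xtrace does not echo redirections, so those are assumptions):

    # Hedged sketch of the change-detection cycle.
    rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    sort_cfg='test/json_config/config_filter.py -method sort'
    $rpc save_config | $sort_cfg > before.json
    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck   # the canary bdev from the trace
    $rpc save_config | $sort_cfg > after.json
    diff -u before.json after.json || echo 'configuration change detected'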
00:04:39.986 19:35:21 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:39.986 19:35:21 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:39.986 19:35:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:39.986 19:35:21 -- common/autotest_common.sh@10 -- # set +x 00:04:39.986 19:35:21 -- json_config/json_config.sh@307 -- # local ret=0 00:04:39.986 19:35:21 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:39.986 19:35:21 -- json_config/json_config.sh@317 -- # [[ -n 1581900 ]] 00:04:39.986 19:35:21 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:39.986 19:35:21 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:39.986 19:35:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:39.986 19:35:21 -- common/autotest_common.sh@10 -- # set +x 00:04:39.986 19:35:21 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:39.986 19:35:21 -- json_config/json_config.sh@193 -- # uname -s 00:04:39.986 19:35:21 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:39.986 19:35:21 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:39.986 19:35:21 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:39.986 19:35:21 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:39.986 19:35:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:39.986 19:35:21 -- common/autotest_common.sh@10 -- # set +x 00:04:39.986 19:35:21 -- json_config/json_config.sh@323 -- # killprocess 1581900 00:04:39.986 19:35:21 -- common/autotest_common.sh@936 -- # '[' -z 1581900 ']' 00:04:39.986 19:35:21 -- common/autotest_common.sh@940 -- # kill -0 1581900 00:04:39.986 19:35:21 -- common/autotest_common.sh@941 -- # uname 00:04:39.986 19:35:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:39.986 19:35:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1581900 00:04:39.986 19:35:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:39.986 19:35:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:39.986 19:35:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1581900' 00:04:39.986 killing process with pid 1581900 00:04:39.986 19:35:21 -- common/autotest_common.sh@955 -- # kill 1581900 00:04:39.986 19:35:21 -- common/autotest_common.sh@960 -- # wait 1581900 00:04:41.894 19:35:23 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:41.894 19:35:23 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:41.894 19:35:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:41.894 19:35:23 -- common/autotest_common.sh@10 -- # set +x 00:04:41.894 19:35:23 -- json_config/json_config.sh@328 -- # return 0 00:04:41.894 19:35:23 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:41.894 INFO: Success 00:04:41.894 00:04:41.894 real 0m16.773s 00:04:41.894 user 0m18.523s 00:04:41.894 sys 0m2.278s 00:04:41.894 19:35:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:41.894 19:35:23 -- common/autotest_common.sh@10 -- # set +x 00:04:41.894 ************************************ 00:04:41.894 END TEST json_config 00:04:41.894 ************************************ 00:04:41.894 19:35:23 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:41.894 19:35:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.894 19:35:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.894 19:35:23 -- common/autotest_common.sh@10 -- # set +x 00:04:41.894 ************************************ 00:04:41.894 START TEST json_config_extra_key 00:04:41.894 ************************************ 00:04:41.894 19:35:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:41.894 19:35:23 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:41.894 19:35:23 -- nvmf/common.sh@7 -- # uname -s 00:04:41.894 19:35:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.894 19:35:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.894 19:35:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.894 19:35:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.894 19:35:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.894 19:35:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.894 19:35:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.894 19:35:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.894 19:35:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.894 19:35:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.894 19:35:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:41.894 19:35:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:41.894 19:35:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.894 19:35:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.894 19:35:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.894 19:35:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.894 19:35:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:41.895 19:35:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.895 19:35:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.895 19:35:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.895 19:35:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.895 19:35:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.895 19:35:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.895 19:35:23 -- paths/export.sh@5 -- # export PATH 00:04:41.895 19:35:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.895 19:35:23 -- nvmf/common.sh@47 -- # : 0 00:04:41.895 19:35:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:41.895 19:35:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:41.895 19:35:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.895 19:35:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.895 19:35:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.895 19:35:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:41.895 19:35:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:41.895 19:35:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:41.895 19:35:23 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:41.895 19:35:23 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:41.895 19:35:23 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:41.895 19:35:23 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:41.895 19:35:23 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:41.895 19:35:23 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:41.895 19:35:23 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:41.895 19:35:23 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:41.895 19:35:23 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:41.895 19:35:23 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.895 19:35:23 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:41.895 INFO: launching applications... 
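The extra_key pass launched below boots spdk_tgt from a standalone JSON file instead of configuring it over RPC. The contents of extra_key.json are never echoed into this log, so the following is only a hedged example of the general shape such a boot-time config takes (the subsystem, method, and params are illustrative, not the real file); the launch flags are the ones the next entries show:

    # Hedged sketch: a minimal boot-time JSON config plus the traced launch flags.
    cat > /tmp/extra_key_example.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 1024, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key_example.json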
00:04:41.895 19:35:23 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:41.895 19:35:23 -- json_config/common.sh@9 -- # local app=target 00:04:41.895 19:35:23 -- json_config/common.sh@10 -- # shift 00:04:41.895 19:35:23 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.895 19:35:23 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.895 19:35:23 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.895 19:35:23 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.895 19:35:23 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.895 19:35:23 -- json_config/common.sh@22 -- # app_pid["$app"]=1582869 00:04:41.895 19:35:23 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:41.895 19:35:23 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.895 Waiting for target to run... 00:04:41.895 19:35:23 -- json_config/common.sh@25 -- # waitforlisten 1582869 /var/tmp/spdk_tgt.sock 00:04:41.895 19:35:23 -- common/autotest_common.sh@817 -- # '[' -z 1582869 ']' 00:04:41.895 19:35:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.895 19:35:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:41.895 19:35:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.895 19:35:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:41.895 19:35:23 -- common/autotest_common.sh@10 -- # set +x 00:04:41.895 [2024-04-24 19:35:23.293801] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:04:41.895 [2024-04-24 19:35:23.293884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1582869 ] 00:04:41.895 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.153 [2024-04-24 19:35:23.643737] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.412 [2024-04-24 19:35:23.735349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.981 19:35:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:42.982 19:35:24 -- common/autotest_common.sh@850 -- # return 0 00:04:42.982 19:35:24 -- json_config/common.sh@26 -- # echo '' 00:04:42.982 00:04:42.982 19:35:24 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:42.982 INFO: shutting down applications... 
00:04:42.982 19:35:24 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:42.982 19:35:24 -- json_config/common.sh@31 -- # local app=target 00:04:42.982 19:35:24 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:42.982 19:35:24 -- json_config/common.sh@35 -- # [[ -n 1582869 ]] 00:04:42.982 19:35:24 -- json_config/common.sh@38 -- # kill -SIGINT 1582869 00:04:42.982 19:35:24 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:42.982 19:35:24 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.982 19:35:24 -- json_config/common.sh@41 -- # kill -0 1582869 00:04:42.982 19:35:24 -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.241 19:35:24 -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.241 19:35:24 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.241 19:35:24 -- json_config/common.sh@41 -- # kill -0 1582869 00:04:43.241 19:35:24 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:43.241 19:35:24 -- json_config/common.sh@43 -- # break 00:04:43.241 19:35:24 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:43.241 19:35:24 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:43.241 SPDK target shutdown done 00:04:43.241 19:35:24 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:43.241 Success 00:04:43.241 00:04:43.241 real 0m1.559s 00:04:43.241 user 0m1.573s 00:04:43.241 sys 0m0.432s 00:04:43.241 19:35:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:43.241 19:35:24 -- common/autotest_common.sh@10 -- # set +x 00:04:43.241 ************************************ 00:04:43.241 END TEST json_config_extra_key 00:04:43.241 ************************************ 00:04:43.500 19:35:24 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.500 19:35:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.500 19:35:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.500 19:35:24 -- common/autotest_common.sh@10 -- # set +x 00:04:43.500 ************************************ 00:04:43.500 START TEST alias_rpc 00:04:43.500 ************************************ 00:04:43.500 19:35:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.500 * Looking for test storage... 00:04:43.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:43.500 19:35:24 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:43.500 19:35:24 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1583192 00:04:43.500 19:35:24 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.500 19:35:24 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1583192 00:04:43.500 19:35:24 -- common/autotest_common.sh@817 -- # '[' -z 1583192 ']' 00:04:43.500 19:35:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.500 19:35:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:43.500 19:35:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
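The alias_rpc pass now starting exercises the other direction: a target that is already running ingests JSON over its RPC socket via load_config, rather than at boot. A bare-bones sketch of that replay, with the -i switch copied verbatim from the invocation that follows and an illustrative snapshot as the payload:

    # Hedged sketch: replay a saved config into a live target.
    scripts/rpc.py save_config > /tmp/cfg.json     # snapshot the running target
    scripts/rpc.py load_config -i < /tmp/cfg.json  # replay it over /var/tmp/spdk.sock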
00:04:43.500 19:35:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:43.500 19:35:24 -- common/autotest_common.sh@10 -- # set +x 00:04:43.500 [2024-04-24 19:35:24.965693] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:04:43.500 [2024-04-24 19:35:24.965771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583192 ] 00:04:43.500 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.759 [2024-04-24 19:35:25.022420] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.759 [2024-04-24 19:35:25.127538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.019 19:35:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:44.019 19:35:25 -- common/autotest_common.sh@850 -- # return 0 00:04:44.019 19:35:25 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:44.279 19:35:25 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1583192 00:04:44.279 19:35:25 -- common/autotest_common.sh@936 -- # '[' -z 1583192 ']' 00:04:44.279 19:35:25 -- common/autotest_common.sh@940 -- # kill -0 1583192 00:04:44.279 19:35:25 -- common/autotest_common.sh@941 -- # uname 00:04:44.279 19:35:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:44.279 19:35:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1583192 00:04:44.279 19:35:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:44.279 19:35:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:44.279 19:35:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1583192' 00:04:44.279 killing process with pid 1583192 00:04:44.279 19:35:25 -- common/autotest_common.sh@955 -- # kill 1583192 00:04:44.279 19:35:25 -- common/autotest_common.sh@960 -- # wait 1583192 00:04:44.848 00:04:44.848 real 0m1.262s 00:04:44.848 user 0m1.339s 00:04:44.848 sys 0m0.417s 00:04:44.848 19:35:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:44.848 19:35:26 -- common/autotest_common.sh@10 -- # set +x 00:04:44.848 ************************************ 00:04:44.848 END TEST alias_rpc 00:04:44.848 ************************************ 00:04:44.848 19:35:26 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:44.848 19:35:26 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:44.848 19:35:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.848 19:35:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.848 19:35:26 -- common/autotest_common.sh@10 -- # set +x 00:04:44.848 ************************************ 00:04:44.848 START TEST spdkcli_tcp 00:04:44.848 ************************************ 00:04:44.848 19:35:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:44.848 * Looking for test storage... 
00:04:44.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:44.848 19:35:26 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:44.848 19:35:26 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:44.848 19:35:26 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:44.848 19:35:26 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:44.848 19:35:26 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:44.848 19:35:26 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:44.848 19:35:26 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:44.848 19:35:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:44.848 19:35:26 -- common/autotest_common.sh@10 -- # set +x 00:04:44.848 19:35:26 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1583386 00:04:44.848 19:35:26 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:44.848 19:35:26 -- spdkcli/tcp.sh@27 -- # waitforlisten 1583386 00:04:44.848 19:35:26 -- common/autotest_common.sh@817 -- # '[' -z 1583386 ']' 00:04:44.848 19:35:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.848 19:35:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:44.848 19:35:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.848 19:35:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:44.848 19:35:26 -- common/autotest_common.sh@10 -- # set +x 00:04:44.848 [2024-04-24 19:35:26.358882] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
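
spdkcli_tcp's point is that the same JSON-RPC works over TCP: in the lines that follow, socat listens on 127.0.0.1:9998 and relays to the target's UNIX socket, and rpc.py connects with -s/-p plus retry (-r) and timeout (-t) options. The bridge in isolation, using the test's own constants:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r: connection retries, -t: per-request timeout in seconds
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"
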
00:04:44.848 [2024-04-24 19:35:26.358982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583386 ] 00:04:45.108 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.108 [2024-04-24 19:35:26.421480] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.108 [2024-04-24 19:35:26.527605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.108 [2024-04-24 19:35:26.527610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.043 19:35:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:46.043 19:35:27 -- common/autotest_common.sh@850 -- # return 0 00:04:46.043 19:35:27 -- spdkcli/tcp.sh@31 -- # socat_pid=1583523 00:04:46.043 19:35:27 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:46.043 19:35:27 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:46.043 [ 00:04:46.043 "bdev_malloc_delete", 00:04:46.043 "bdev_malloc_create", 00:04:46.043 "bdev_null_resize", 00:04:46.043 "bdev_null_delete", 00:04:46.043 "bdev_null_create", 00:04:46.043 "bdev_nvme_cuse_unregister", 00:04:46.043 "bdev_nvme_cuse_register", 00:04:46.043 "bdev_opal_new_user", 00:04:46.043 "bdev_opal_set_lock_state", 00:04:46.043 "bdev_opal_delete", 00:04:46.043 "bdev_opal_get_info", 00:04:46.043 "bdev_opal_create", 00:04:46.043 "bdev_nvme_opal_revert", 00:04:46.043 "bdev_nvme_opal_init", 00:04:46.043 "bdev_nvme_send_cmd", 00:04:46.043 "bdev_nvme_get_path_iostat", 00:04:46.043 "bdev_nvme_get_mdns_discovery_info", 00:04:46.043 "bdev_nvme_stop_mdns_discovery", 00:04:46.043 "bdev_nvme_start_mdns_discovery", 00:04:46.043 "bdev_nvme_set_multipath_policy", 00:04:46.043 "bdev_nvme_set_preferred_path", 00:04:46.043 "bdev_nvme_get_io_paths", 00:04:46.043 "bdev_nvme_remove_error_injection", 00:04:46.043 "bdev_nvme_add_error_injection", 00:04:46.043 "bdev_nvme_get_discovery_info", 00:04:46.043 "bdev_nvme_stop_discovery", 00:04:46.043 "bdev_nvme_start_discovery", 00:04:46.043 "bdev_nvme_get_controller_health_info", 00:04:46.043 "bdev_nvme_disable_controller", 00:04:46.043 "bdev_nvme_enable_controller", 00:04:46.043 "bdev_nvme_reset_controller", 00:04:46.043 "bdev_nvme_get_transport_statistics", 00:04:46.043 "bdev_nvme_apply_firmware", 00:04:46.043 "bdev_nvme_detach_controller", 00:04:46.043 "bdev_nvme_get_controllers", 00:04:46.043 "bdev_nvme_attach_controller", 00:04:46.043 "bdev_nvme_set_hotplug", 00:04:46.043 "bdev_nvme_set_options", 00:04:46.043 "bdev_passthru_delete", 00:04:46.043 "bdev_passthru_create", 00:04:46.043 "bdev_lvol_grow_lvstore", 00:04:46.043 "bdev_lvol_get_lvols", 00:04:46.043 "bdev_lvol_get_lvstores", 00:04:46.043 "bdev_lvol_delete", 00:04:46.043 "bdev_lvol_set_read_only", 00:04:46.043 "bdev_lvol_resize", 00:04:46.043 "bdev_lvol_decouple_parent", 00:04:46.043 "bdev_lvol_inflate", 00:04:46.043 "bdev_lvol_rename", 00:04:46.043 "bdev_lvol_clone_bdev", 00:04:46.043 "bdev_lvol_clone", 00:04:46.043 "bdev_lvol_snapshot", 00:04:46.043 "bdev_lvol_create", 00:04:46.043 "bdev_lvol_delete_lvstore", 00:04:46.043 "bdev_lvol_rename_lvstore", 00:04:46.043 "bdev_lvol_create_lvstore", 00:04:46.043 "bdev_raid_set_options", 00:04:46.043 "bdev_raid_remove_base_bdev", 00:04:46.043 "bdev_raid_add_base_bdev", 00:04:46.043 "bdev_raid_delete", 00:04:46.043 "bdev_raid_create", 
00:04:46.043 "bdev_raid_get_bdevs", 00:04:46.043 "bdev_error_inject_error", 00:04:46.043 "bdev_error_delete", 00:04:46.043 "bdev_error_create", 00:04:46.043 "bdev_split_delete", 00:04:46.043 "bdev_split_create", 00:04:46.043 "bdev_delay_delete", 00:04:46.043 "bdev_delay_create", 00:04:46.043 "bdev_delay_update_latency", 00:04:46.043 "bdev_zone_block_delete", 00:04:46.043 "bdev_zone_block_create", 00:04:46.043 "blobfs_create", 00:04:46.044 "blobfs_detect", 00:04:46.044 "blobfs_set_cache_size", 00:04:46.044 "bdev_aio_delete", 00:04:46.044 "bdev_aio_rescan", 00:04:46.044 "bdev_aio_create", 00:04:46.044 "bdev_ftl_set_property", 00:04:46.044 "bdev_ftl_get_properties", 00:04:46.044 "bdev_ftl_get_stats", 00:04:46.044 "bdev_ftl_unmap", 00:04:46.044 "bdev_ftl_unload", 00:04:46.044 "bdev_ftl_delete", 00:04:46.044 "bdev_ftl_load", 00:04:46.044 "bdev_ftl_create", 00:04:46.044 "bdev_virtio_attach_controller", 00:04:46.044 "bdev_virtio_scsi_get_devices", 00:04:46.044 "bdev_virtio_detach_controller", 00:04:46.044 "bdev_virtio_blk_set_hotplug", 00:04:46.044 "bdev_iscsi_delete", 00:04:46.044 "bdev_iscsi_create", 00:04:46.044 "bdev_iscsi_set_options", 00:04:46.044 "accel_error_inject_error", 00:04:46.044 "ioat_scan_accel_module", 00:04:46.044 "dsa_scan_accel_module", 00:04:46.044 "iaa_scan_accel_module", 00:04:46.044 "vfu_virtio_create_scsi_endpoint", 00:04:46.044 "vfu_virtio_scsi_remove_target", 00:04:46.044 "vfu_virtio_scsi_add_target", 00:04:46.044 "vfu_virtio_create_blk_endpoint", 00:04:46.044 "vfu_virtio_delete_endpoint", 00:04:46.044 "keyring_file_remove_key", 00:04:46.044 "keyring_file_add_key", 00:04:46.044 "iscsi_get_histogram", 00:04:46.044 "iscsi_enable_histogram", 00:04:46.044 "iscsi_set_options", 00:04:46.044 "iscsi_get_auth_groups", 00:04:46.044 "iscsi_auth_group_remove_secret", 00:04:46.044 "iscsi_auth_group_add_secret", 00:04:46.044 "iscsi_delete_auth_group", 00:04:46.044 "iscsi_create_auth_group", 00:04:46.044 "iscsi_set_discovery_auth", 00:04:46.044 "iscsi_get_options", 00:04:46.044 "iscsi_target_node_request_logout", 00:04:46.044 "iscsi_target_node_set_redirect", 00:04:46.044 "iscsi_target_node_set_auth", 00:04:46.044 "iscsi_target_node_add_lun", 00:04:46.044 "iscsi_get_stats", 00:04:46.044 "iscsi_get_connections", 00:04:46.044 "iscsi_portal_group_set_auth", 00:04:46.044 "iscsi_start_portal_group", 00:04:46.044 "iscsi_delete_portal_group", 00:04:46.044 "iscsi_create_portal_group", 00:04:46.044 "iscsi_get_portal_groups", 00:04:46.044 "iscsi_delete_target_node", 00:04:46.044 "iscsi_target_node_remove_pg_ig_maps", 00:04:46.044 "iscsi_target_node_add_pg_ig_maps", 00:04:46.044 "iscsi_create_target_node", 00:04:46.044 "iscsi_get_target_nodes", 00:04:46.044 "iscsi_delete_initiator_group", 00:04:46.044 "iscsi_initiator_group_remove_initiators", 00:04:46.044 "iscsi_initiator_group_add_initiators", 00:04:46.044 "iscsi_create_initiator_group", 00:04:46.044 "iscsi_get_initiator_groups", 00:04:46.044 "nvmf_set_crdt", 00:04:46.044 "nvmf_set_config", 00:04:46.044 "nvmf_set_max_subsystems", 00:04:46.044 "nvmf_subsystem_get_listeners", 00:04:46.044 "nvmf_subsystem_get_qpairs", 00:04:46.044 "nvmf_subsystem_get_controllers", 00:04:46.044 "nvmf_get_stats", 00:04:46.044 "nvmf_get_transports", 00:04:46.044 "nvmf_create_transport", 00:04:46.044 "nvmf_get_targets", 00:04:46.044 "nvmf_delete_target", 00:04:46.044 "nvmf_create_target", 00:04:46.044 "nvmf_subsystem_allow_any_host", 00:04:46.044 "nvmf_subsystem_remove_host", 00:04:46.044 "nvmf_subsystem_add_host", 00:04:46.044 "nvmf_ns_remove_host", 00:04:46.044 
"nvmf_ns_add_host", 00:04:46.044 "nvmf_subsystem_remove_ns", 00:04:46.044 "nvmf_subsystem_add_ns", 00:04:46.044 "nvmf_subsystem_listener_set_ana_state", 00:04:46.044 "nvmf_discovery_get_referrals", 00:04:46.044 "nvmf_discovery_remove_referral", 00:04:46.044 "nvmf_discovery_add_referral", 00:04:46.044 "nvmf_subsystem_remove_listener", 00:04:46.044 "nvmf_subsystem_add_listener", 00:04:46.044 "nvmf_delete_subsystem", 00:04:46.044 "nvmf_create_subsystem", 00:04:46.044 "nvmf_get_subsystems", 00:04:46.044 "env_dpdk_get_mem_stats", 00:04:46.044 "nbd_get_disks", 00:04:46.044 "nbd_stop_disk", 00:04:46.044 "nbd_start_disk", 00:04:46.044 "ublk_recover_disk", 00:04:46.044 "ublk_get_disks", 00:04:46.044 "ublk_stop_disk", 00:04:46.044 "ublk_start_disk", 00:04:46.044 "ublk_destroy_target", 00:04:46.044 "ublk_create_target", 00:04:46.044 "virtio_blk_create_transport", 00:04:46.044 "virtio_blk_get_transports", 00:04:46.044 "vhost_controller_set_coalescing", 00:04:46.044 "vhost_get_controllers", 00:04:46.044 "vhost_delete_controller", 00:04:46.044 "vhost_create_blk_controller", 00:04:46.044 "vhost_scsi_controller_remove_target", 00:04:46.044 "vhost_scsi_controller_add_target", 00:04:46.044 "vhost_start_scsi_controller", 00:04:46.044 "vhost_create_scsi_controller", 00:04:46.044 "thread_set_cpumask", 00:04:46.044 "framework_get_scheduler", 00:04:46.044 "framework_set_scheduler", 00:04:46.044 "framework_get_reactors", 00:04:46.044 "thread_get_io_channels", 00:04:46.044 "thread_get_pollers", 00:04:46.044 "thread_get_stats", 00:04:46.044 "framework_monitor_context_switch", 00:04:46.044 "spdk_kill_instance", 00:04:46.044 "log_enable_timestamps", 00:04:46.044 "log_get_flags", 00:04:46.044 "log_clear_flag", 00:04:46.044 "log_set_flag", 00:04:46.044 "log_get_level", 00:04:46.044 "log_set_level", 00:04:46.044 "log_get_print_level", 00:04:46.044 "log_set_print_level", 00:04:46.044 "framework_enable_cpumask_locks", 00:04:46.044 "framework_disable_cpumask_locks", 00:04:46.044 "framework_wait_init", 00:04:46.044 "framework_start_init", 00:04:46.044 "scsi_get_devices", 00:04:46.044 "bdev_get_histogram", 00:04:46.044 "bdev_enable_histogram", 00:04:46.044 "bdev_set_qos_limit", 00:04:46.044 "bdev_set_qd_sampling_period", 00:04:46.044 "bdev_get_bdevs", 00:04:46.044 "bdev_reset_iostat", 00:04:46.044 "bdev_get_iostat", 00:04:46.044 "bdev_examine", 00:04:46.044 "bdev_wait_for_examine", 00:04:46.044 "bdev_set_options", 00:04:46.044 "notify_get_notifications", 00:04:46.044 "notify_get_types", 00:04:46.044 "accel_get_stats", 00:04:46.044 "accel_set_options", 00:04:46.044 "accel_set_driver", 00:04:46.044 "accel_crypto_key_destroy", 00:04:46.044 "accel_crypto_keys_get", 00:04:46.044 "accel_crypto_key_create", 00:04:46.044 "accel_assign_opc", 00:04:46.044 "accel_get_module_info", 00:04:46.044 "accel_get_opc_assignments", 00:04:46.044 "vmd_rescan", 00:04:46.044 "vmd_remove_device", 00:04:46.044 "vmd_enable", 00:04:46.044 "sock_set_default_impl", 00:04:46.044 "sock_impl_set_options", 00:04:46.044 "sock_impl_get_options", 00:04:46.044 "iobuf_get_stats", 00:04:46.044 "iobuf_set_options", 00:04:46.044 "keyring_get_keys", 00:04:46.044 "framework_get_pci_devices", 00:04:46.044 "framework_get_config", 00:04:46.044 "framework_get_subsystems", 00:04:46.044 "vfu_tgt_set_base_path", 00:04:46.044 "trace_get_info", 00:04:46.044 "trace_get_tpoint_group_mask", 00:04:46.044 "trace_disable_tpoint_group", 00:04:46.044 "trace_enable_tpoint_group", 00:04:46.045 "trace_clear_tpoint_mask", 00:04:46.045 "trace_set_tpoint_mask", 00:04:46.045 
"spdk_get_version", 00:04:46.045 "rpc_get_methods" 00:04:46.045 ] 00:04:46.045 19:35:27 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:46.045 19:35:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:46.045 19:35:27 -- common/autotest_common.sh@10 -- # set +x 00:04:46.045 19:35:27 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:46.045 19:35:27 -- spdkcli/tcp.sh@38 -- # killprocess 1583386 00:04:46.045 19:35:27 -- common/autotest_common.sh@936 -- # '[' -z 1583386 ']' 00:04:46.045 19:35:27 -- common/autotest_common.sh@940 -- # kill -0 1583386 00:04:46.045 19:35:27 -- common/autotest_common.sh@941 -- # uname 00:04:46.045 19:35:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:46.045 19:35:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1583386 00:04:46.304 19:35:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:46.304 19:35:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:46.304 19:35:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1583386' 00:04:46.304 killing process with pid 1583386 00:04:46.304 19:35:27 -- common/autotest_common.sh@955 -- # kill 1583386 00:04:46.304 19:35:27 -- common/autotest_common.sh@960 -- # wait 1583386 00:04:46.563 00:04:46.563 real 0m1.785s 00:04:46.563 user 0m3.421s 00:04:46.563 sys 0m0.460s 00:04:46.563 19:35:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:46.563 19:35:28 -- common/autotest_common.sh@10 -- # set +x 00:04:46.563 ************************************ 00:04:46.563 END TEST spdkcli_tcp 00:04:46.563 ************************************ 00:04:46.563 19:35:28 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:46.563 19:35:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.563 19:35:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.563 19:35:28 -- common/autotest_common.sh@10 -- # set +x 00:04:46.822 ************************************ 00:04:46.822 START TEST dpdk_mem_utility 00:04:46.822 ************************************ 00:04:46.822 19:35:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:46.822 * Looking for test storage... 00:04:46.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:46.822 19:35:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:46.822 19:35:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1583725 00:04:46.822 19:35:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.822 19:35:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1583725 00:04:46.822 19:35:28 -- common/autotest_common.sh@817 -- # '[' -z 1583725 ']' 00:04:46.822 19:35:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.822 19:35:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:46.822 19:35:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:46.822 19:35:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:46.822 19:35:28 -- common/autotest_common.sh@10 -- # set +x 00:04:46.822 [2024-04-24 19:35:28.255801] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:04:46.822 [2024-04-24 19:35:28.255906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583725 ] 00:04:46.822 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.822 [2024-04-24 19:35:28.313115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.081 [2024-04-24 19:35:28.420361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.342 19:35:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:47.342 19:35:28 -- common/autotest_common.sh@850 -- # return 0 00:04:47.342 19:35:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:47.342 19:35:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:47.342 19:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:47.342 19:35:28 -- common/autotest_common.sh@10 -- # set +x 00:04:47.342 { 00:04:47.342 "filename": "/tmp/spdk_mem_dump.txt" 00:04:47.342 } 00:04:47.342 19:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:47.342 19:35:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:47.342 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:47.342 1 heaps totaling size 814.000000 MiB 00:04:47.342 size: 814.000000 MiB heap id: 0 00:04:47.342 end heaps---------- 00:04:47.342 8 mempools totaling size 598.116089 MiB 00:04:47.342 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:47.342 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:47.342 size: 84.521057 MiB name: bdev_io_1583725 00:04:47.342 size: 51.011292 MiB name: evtpool_1583725 00:04:47.342 size: 50.003479 MiB name: msgpool_1583725 00:04:47.342 size: 21.763794 MiB name: PDU_Pool 00:04:47.342 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:47.342 size: 0.026123 MiB name: Session_Pool 00:04:47.342 end mempools------- 00:04:47.342 6 memzones totaling size 4.142822 MiB 00:04:47.342 size: 1.000366 MiB name: RG_ring_0_1583725 00:04:47.342 size: 1.000366 MiB name: RG_ring_1_1583725 00:04:47.342 size: 1.000366 MiB name: RG_ring_4_1583725 00:04:47.342 size: 1.000366 MiB name: RG_ring_5_1583725 00:04:47.342 size: 0.125366 MiB name: RG_ring_2_1583725 00:04:47.342 size: 0.015991 MiB name: RG_ring_3_1583725 00:04:47.342 end memzones------- 00:04:47.342 19:35:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:47.342 heap id: 0 total size: 814.000000 MiB number of busy elements: 42 number of free elements: 15 00:04:47.342 list of free elements. 
size: 12.517212 MiB 00:04:47.342 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:47.342 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:47.342 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:47.342 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:47.342 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:47.342 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:47.342 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:47.342 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:47.342 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:47.342 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:47.342 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:47.342 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:47.342 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:47.342 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:47.342 element at address: 0x200003a00000 with size: 0.353394 MiB 00:04:47.342 list of standard malloc elements. size: 199.220215 MiB 00:04:47.342 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:47.342 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:47.342 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:47.342 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:47.342 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:47.342 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:47.342 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:47.342 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:47.342 element at address: 0x200003aff280 with size: 0.002136 MiB 00:04:47.342 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:47.342 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:47.342 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:47.342 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:47.342 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:47.342 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:47.342 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:47.342 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:47.342 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:47.342 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:47.342 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:47.342 element at address: 0x200003a5a780 with size: 0.000183 MiB 00:04:47.342 element at address: 0x200003adaa40 with size: 0.000183 MiB 00:04:47.342 element at address: 0x200003adac40 with size: 0.000183 MiB 00:04:47.342 element at address: 0x200003adef00 with size: 0.000183 MiB 00:04:47.342 element at address: 0x200003aff1c0 with size: 0.000183 MiB 00:04:47.342 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:47.342 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:47.342 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:47.342 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:47.342 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:47.342 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:47.342 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:47.342 element at address: 0x2000192efc40 with size: 0.000183 MiB 
00:04:47.342 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:47.342 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:47.342 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:47.342 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:47.342 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:47.342 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:47.342 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:47.342 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:47.342 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:47.342 list of memzone associated elements. size: 602.262573 MiB 00:04:47.342 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:47.342 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:47.342 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:47.342 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:47.342 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:47.342 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1583725_0 00:04:47.342 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:47.342 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1583725_0 00:04:47.342 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:47.342 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1583725_0 00:04:47.342 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:47.342 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:47.342 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:47.342 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:47.342 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:47.342 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1583725 00:04:47.342 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:47.342 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1583725 00:04:47.342 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:47.342 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1583725 00:04:47.342 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:47.342 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:47.343 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:47.343 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:47.343 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:47.343 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:47.343 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:47.343 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:47.343 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:47.343 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1583725 00:04:47.343 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:47.343 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1583725 00:04:47.343 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:47.343 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1583725 00:04:47.343 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:47.343 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1583725 00:04:47.343 element at 
address: 0x200003a5a840 with size: 0.500488 MiB 00:04:47.343 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1583725 00:04:47.343 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:47.343 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:47.343 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:47.343 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:47.343 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:47.343 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:47.343 element at address: 0x200003adefc0 with size: 0.125488 MiB 00:04:47.343 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1583725 00:04:47.343 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:47.343 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:47.343 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:47.343 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:47.343 element at address: 0x200003adad00 with size: 0.016113 MiB 00:04:47.343 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1583725 00:04:47.343 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:47.343 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:47.343 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:47.343 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1583725 00:04:47.343 element at address: 0x200003adab00 with size: 0.000305 MiB 00:04:47.343 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1583725 00:04:47.343 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:47.343 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:47.343 19:35:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:47.343 19:35:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1583725 00:04:47.343 19:35:28 -- common/autotest_common.sh@936 -- # '[' -z 1583725 ']' 00:04:47.343 19:35:28 -- common/autotest_common.sh@940 -- # kill -0 1583725 00:04:47.343 19:35:28 -- common/autotest_common.sh@941 -- # uname 00:04:47.343 19:35:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:47.343 19:35:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1583725 00:04:47.343 19:35:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:47.343 19:35:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:47.343 19:35:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1583725' 00:04:47.343 killing process with pid 1583725 00:04:47.343 19:35:28 -- common/autotest_common.sh@955 -- # kill 1583725 00:04:47.343 19:35:28 -- common/autotest_common.sh@960 -- # wait 1583725 00:04:47.909 00:04:47.909 real 0m1.128s 00:04:47.909 user 0m1.084s 00:04:47.909 sys 0m0.413s 00:04:47.909 19:35:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.909 19:35:29 -- common/autotest_common.sh@10 -- # set +x 00:04:47.909 ************************************ 00:04:47.909 END TEST dpdk_mem_utility 00:04:47.909 ************************************ 00:04:47.909 19:35:29 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:47.909 19:35:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.909 19:35:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 
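
The dpdk_mem_utility pass above is two steps: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that file — with no arguments as the heap/mempool/memzone summary, with -m <heap-id> as the per-element listing shown above. Standalone:

    ./scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                 # summary: heaps, mempools, memzones
    ./scripts/dpdk_mem_info.py -m 0            # element-level view of heap 0
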
00:04:47.909 19:35:29 -- common/autotest_common.sh@10 -- # set +x 00:04:47.909 ************************************ 00:04:47.909 START TEST event 00:04:47.909 ************************************ 00:04:47.909 19:35:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:48.169 * Looking for test storage... 00:04:48.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:48.169 19:35:29 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:48.169 19:35:29 -- bdev/nbd_common.sh@6 -- # set -e 00:04:48.169 19:35:29 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.169 19:35:29 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:48.169 19:35:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.169 19:35:29 -- common/autotest_common.sh@10 -- # set +x 00:04:48.169 ************************************ 00:04:48.169 START TEST event_perf 00:04:48.169 ************************************ 00:04:48.169 19:35:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.169 Running I/O for 1 seconds...[2024-04-24 19:35:29.569273] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:04:48.169 [2024-04-24 19:35:29.569339] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583937 ] 00:04:48.169 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.169 [2024-04-24 19:35:29.638865] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.429 [2024-04-24 19:35:29.761433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.429 [2024-04-24 19:35:29.761504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.429 [2024-04-24 19:35:29.761597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.429 [2024-04-24 19:35:29.761599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.803 Running I/O for 1 seconds... 00:04:49.803 lcore 0: 228314 00:04:49.803 lcore 1: 228313 00:04:49.803 lcore 2: 228313 00:04:49.803 lcore 3: 228312 00:04:49.803 done. 
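
event_perf's output reads as one counter per reactor: with -m 0xF (a hex coremask selecting cores 0-3) and -t 1 (run time in seconds, judging by the "Running I/O for 1 seconds" banner), each lcore processed roughly 228k events. Re-running wider or longer only means changing those two flags:

    # -m: hex coremask, -t: duration in seconds (meanings inferred from this run)
    ./test/event/event_perf/event_perf -m 0x3 -t 5
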
00:04:49.803 00:04:49.803 real 0m1.333s 00:04:49.803 user 0m4.237s 00:04:49.803 sys 0m0.087s 00:04:49.803 19:35:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.803 19:35:30 -- common/autotest_common.sh@10 -- # set +x 00:04:49.803 ************************************ 00:04:49.803 END TEST event_perf 00:04:49.803 ************************************ 00:04:49.803 19:35:30 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:49.803 19:35:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:49.803 19:35:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.803 19:35:30 -- common/autotest_common.sh@10 -- # set +x 00:04:49.803 ************************************ 00:04:49.803 START TEST event_reactor 00:04:49.803 ************************************ 00:04:49.803 19:35:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:49.803 [2024-04-24 19:35:31.022006] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:04:49.803 [2024-04-24 19:35:31.022066] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584098 ] 00:04:49.803 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.803 [2024-04-24 19:35:31.088196] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.803 [2024-04-24 19:35:31.203361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.181 test_start 00:04:51.181 oneshot 00:04:51.181 tick 100 00:04:51.181 tick 100 00:04:51.181 tick 250 00:04:51.181 tick 100 00:04:51.181 tick 100 00:04:51.181 tick 100 00:04:51.181 tick 250 00:04:51.181 tick 500 00:04:51.181 tick 100 00:04:51.181 tick 100 00:04:51.181 tick 250 00:04:51.181 tick 100 00:04:51.181 tick 100 00:04:51.181 test_end 00:04:51.181 00:04:51.181 real 0m1.317s 00:04:51.181 user 0m1.233s 00:04:51.181 sys 0m0.079s 00:04:51.181 19:35:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:51.181 19:35:32 -- common/autotest_common.sh@10 -- # set +x 00:04:51.181 ************************************ 00:04:51.181 END TEST event_reactor 00:04:51.181 ************************************ 00:04:51.181 19:35:32 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.181 19:35:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:51.181 19:35:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.181 19:35:32 -- common/autotest_common.sh@10 -- # set +x 00:04:51.181 ************************************ 00:04:51.181 START TEST event_reactor_perf 00:04:51.181 ************************************ 00:04:51.181 19:35:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.181 [2024-04-24 19:35:32.460853] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:04:51.181 [2024-04-24 19:35:32.460915] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584269 ] 00:04:51.181 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.181 [2024-04-24 19:35:32.523925] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.181 [2024-04-24 19:35:32.645181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.632 test_start 00:04:52.632 test_end 00:04:52.632 Performance: 352477 events per second 00:04:52.632 00:04:52.632 real 0m1.321s 00:04:52.632 user 0m1.236s 00:04:52.632 sys 0m0.079s 00:04:52.632 19:35:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.632 19:35:33 -- common/autotest_common.sh@10 -- # set +x 00:04:52.632 ************************************ 00:04:52.632 END TEST event_reactor_perf 00:04:52.632 ************************************ 00:04:52.632 19:35:33 -- event/event.sh@49 -- # uname -s 00:04:52.632 19:35:33 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:52.632 19:35:33 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:52.632 19:35:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.632 19:35:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.632 19:35:33 -- common/autotest_common.sh@10 -- # set +x 00:04:52.632 ************************************ 00:04:52.632 START TEST event_scheduler 00:04:52.632 ************************************ 00:04:52.632 19:35:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:52.632 * Looking for test storage... 00:04:52.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:52.632 19:35:33 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:52.632 19:35:33 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1584575 00:04:52.632 19:35:33 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:52.632 19:35:33 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.632 19:35:33 -- scheduler/scheduler.sh@37 -- # waitforlisten 1584575 00:04:52.632 19:35:33 -- common/autotest_common.sh@817 -- # '[' -z 1584575 ']' 00:04:52.632 19:35:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.632 19:35:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:52.632 19:35:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.632 19:35:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:52.632 19:35:33 -- common/autotest_common.sh@10 -- # set +x 00:04:52.632 [2024-04-24 19:35:33.981435] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:04:52.632 [2024-04-24 19:35:33.981510] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584575 ] 00:04:52.632 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.632 [2024-04-24 19:35:34.039264] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:52.891 [2024-04-24 19:35:34.147865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.891 [2024-04-24 19:35:34.147924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.891 [2024-04-24 19:35:34.147992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:52.891 [2024-04-24 19:35:34.147997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:52.891 19:35:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:52.891 19:35:34 -- common/autotest_common.sh@850 -- # return 0 00:04:52.891 19:35:34 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:52.891 19:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.891 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:52.891 POWER: Env isn't set yet! 00:04:52.891 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:52.891 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:52.891 POWER: Cannot get available frequencies of lcore 0 00:04:52.891 POWER: Attempting to initialise PSTAT power management... 00:04:52.891 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:52.891 POWER: Initialized successfully for lcore 0 power management 00:04:52.891 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:52.891 POWER: Initialized successfully for lcore 1 power management 00:04:52.891 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:52.891 POWER: Initialized successfully for lcore 2 power management 00:04:52.891 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:52.891 POWER: Initialized successfully for lcore 3 power management 00:04:52.891 19:35:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.891 19:35:34 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:52.891 19:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.891 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:52.891 [2024-04-24 19:35:34.308156] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
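
Because the scheduler app was started with --wait-for-rpc, framework initialization stalls until RPC tells it to proceed; that is why framework_set_scheduler can run first, and why the reactors and the ACPI/PSTAT power management only come up after framework_start_init. The two-call sequence, standalone:

    # An app launched with --wait-for-rpc sits in pre-init until released:
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
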
00:04:52.891 19:35:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.891 19:35:34 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:52.891 19:35:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.891 19:35:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.891 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:53.150 ************************************ 00:04:53.150 START TEST scheduler_create_thread 00:04:53.150 ************************************ 00:04:53.150 19:35:34 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:04:53.150 19:35:34 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:53.150 19:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.150 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:53.150 2 00:04:53.150 19:35:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.150 19:35:34 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:53.150 19:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.150 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:53.150 3 00:04:53.150 19:35:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.150 19:35:34 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:53.150 19:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.150 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:53.150 4 00:04:53.150 19:35:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.150 19:35:34 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:53.150 19:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.150 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:53.150 5 00:04:53.150 19:35:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.150 19:35:34 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:53.150 19:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.150 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:53.150 6 00:04:53.150 19:35:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.150 19:35:34 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:53.150 19:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.150 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:53.150 7 00:04:53.150 19:35:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.150 19:35:34 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:53.150 19:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.150 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:53.150 8 00:04:53.150 19:35:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.150 19:35:34 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:53.150 19:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.150 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:53.150 9 00:04:53.150 
19:35:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.150 19:35:34 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:53.150 19:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.150 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:53.150 10 00:04:53.150 19:35:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.150 19:35:34 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:53.150 19:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.150 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:53.150 19:35:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.150 19:35:34 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:53.150 19:35:34 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:53.150 19:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.150 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:53.150 19:35:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.150 19:35:34 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:53.150 19:35:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.150 19:35:34 -- common/autotest_common.sh@10 -- # set +x 00:04:54.528 19:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.528 19:35:35 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:54.528 19:35:35 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:54.528 19:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.528 19:35:35 -- common/autotest_common.sh@10 -- # set +x 00:04:55.909 19:35:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.909 00:04:55.909 real 0m2.619s 00:04:55.909 user 0m0.009s 00:04:55.909 sys 0m0.005s 00:04:55.909 19:35:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.909 19:35:37 -- common/autotest_common.sh@10 -- # set +x 00:04:55.909 ************************************ 00:04:55.909 END TEST scheduler_create_thread 00:04:55.909 ************************************ 00:04:55.909 19:35:37 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:55.909 19:35:37 -- scheduler/scheduler.sh@46 -- # killprocess 1584575 00:04:55.909 19:35:37 -- common/autotest_common.sh@936 -- # '[' -z 1584575 ']' 00:04:55.909 19:35:37 -- common/autotest_common.sh@940 -- # kill -0 1584575 00:04:55.909 19:35:37 -- common/autotest_common.sh@941 -- # uname 00:04:55.909 19:35:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:55.909 19:35:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1584575 00:04:55.909 19:35:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:55.909 19:35:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:55.909 19:35:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1584575' 00:04:55.909 killing process with pid 1584575 00:04:55.909 19:35:37 -- common/autotest_common.sh@955 -- # kill 1584575 00:04:55.909 19:35:37 -- common/autotest_common.sh@960 -- # wait 1584575 00:04:56.168 [2024-04-24 19:35:37.511059] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
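
scheduler_create_thread above talks to the test app through an rpc.py plugin (note the --plugin scheduler_plugin flag; these RPCs come from test/event/scheduler, not core SPDK): scheduler_thread_create takes a name, an optional pin mask -m, and an active percentage -a, and prints a thread id that scheduler_thread_set_active and scheduler_thread_delete then take as input. One plausible round trip using the test's own values:

    # Requires the scheduler_plugin module to be importable (the test arranges that)
    tid=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tid"
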
00:04:56.168 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:56.168 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:56.168 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:56.168 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:56.168 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:56.168 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:56.168 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:56.168 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:56.427 00:04:56.427 real 0m3.895s 00:04:56.427 user 0m5.866s 00:04:56.427 sys 0m0.375s 00:04:56.427 19:35:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:56.427 19:35:37 -- common/autotest_common.sh@10 -- # set +x 00:04:56.427 ************************************ 00:04:56.427 END TEST event_scheduler 00:04:56.427 ************************************ 00:04:56.427 19:35:37 -- event/event.sh@51 -- # modprobe -n nbd 00:04:56.427 19:35:37 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:56.427 19:35:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.427 19:35:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.427 19:35:37 -- common/autotest_common.sh@10 -- # set +x 00:04:56.427 ************************************ 00:04:56.427 START TEST app_repeat 00:04:56.427 ************************************ 00:04:56.427 19:35:37 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:04:56.427 19:35:37 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.427 19:35:37 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.427 19:35:37 -- event/event.sh@13 -- # local nbd_list 00:04:56.427 19:35:37 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.427 19:35:37 -- event/event.sh@14 -- # local bdev_list 00:04:56.427 19:35:37 -- event/event.sh@15 -- # local repeat_times=4 00:04:56.427 19:35:37 -- event/event.sh@17 -- # modprobe nbd 00:04:56.427 19:35:37 -- event/event.sh@19 -- # repeat_pid=1585045 00:04:56.427 19:35:37 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:56.427 19:35:37 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.427 19:35:37 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1585045' 00:04:56.427 Process app_repeat pid: 1585045 00:04:56.427 19:35:37 -- event/event.sh@23 -- # for i in {0..2} 00:04:56.427 19:35:37 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:56.427 spdk_app_start Round 0 00:04:56.427 19:35:37 -- event/event.sh@25 -- # waitforlisten 1585045 /var/tmp/spdk-nbd.sock 00:04:56.427 19:35:37 -- common/autotest_common.sh@817 -- # '[' -z 1585045 ']' 00:04:56.427 19:35:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.427 19:35:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:56.427 19:35:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:56.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.427 19:35:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:56.427 19:35:37 -- common/autotest_common.sh@10 -- # set +x 00:04:56.427 [2024-04-24 19:35:37.934510] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:04:56.427 [2024-04-24 19:35:37.934563] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585045 ] 00:04:56.687 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.687 [2024-04-24 19:35:37.991051] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.687 [2024-04-24 19:35:38.105213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.687 [2024-04-24 19:35:38.105220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.946 19:35:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:56.946 19:35:38 -- common/autotest_common.sh@850 -- # return 0 00:04:56.946 19:35:38 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.207 Malloc0 00:04:57.207 19:35:38 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.466 Malloc1 00:04:57.466 19:35:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@12 -- # local i 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.466 19:35:38 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:57.724 /dev/nbd0 00:04:57.724 19:35:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:57.724 19:35:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:57.724 19:35:38 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:57.724 19:35:38 -- common/autotest_common.sh@855 -- # local i 00:04:57.724 19:35:38 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:57.724 19:35:38 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:57.724 19:35:38 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:57.724 19:35:38 -- 
common/autotest_common.sh@859 -- # break 00:04:57.724 19:35:38 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:57.724 19:35:38 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:57.724 19:35:38 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.724 1+0 records in 00:04:57.724 1+0 records out 00:04:57.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190391 s, 21.5 MB/s 00:04:57.724 19:35:39 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.724 19:35:39 -- common/autotest_common.sh@872 -- # size=4096 00:04:57.724 19:35:39 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.724 19:35:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:57.724 19:35:39 -- common/autotest_common.sh@875 -- # return 0 00:04:57.724 19:35:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.724 19:35:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.724 19:35:39 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.983 /dev/nbd1 00:04:57.983 19:35:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.983 19:35:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.983 19:35:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:57.983 19:35:39 -- common/autotest_common.sh@855 -- # local i 00:04:57.983 19:35:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:57.983 19:35:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:57.983 19:35:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:57.983 19:35:39 -- common/autotest_common.sh@859 -- # break 00:04:57.983 19:35:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:57.983 19:35:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:57.983 19:35:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.983 1+0 records in 00:04:57.983 1+0 records out 00:04:57.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201698 s, 20.3 MB/s 00:04:57.983 19:35:39 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.983 19:35:39 -- common/autotest_common.sh@872 -- # size=4096 00:04:57.983 19:35:39 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.983 19:35:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:57.983 19:35:39 -- common/autotest_common.sh@875 -- # return 0 00:04:57.983 19:35:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.983 19:35:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.983 19:35:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.983 19:35:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.983 19:35:39 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:58.242 { 00:04:58.242 "nbd_device": "/dev/nbd0", 00:04:58.242 "bdev_name": "Malloc0" 00:04:58.242 }, 00:04:58.242 { 00:04:58.242 "nbd_device": "/dev/nbd1", 
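
The app_repeat body visible here is the standard nbd round trip: create malloc bdevs over the /var/tmp/spdk-nbd.sock RPC socket, export each as a kernel /dev/nbdX via nbd_start_disk (waitfornbd's single 4 KiB dd read above just confirms the device answers), then, below, write a 1 MiB random file through the device with O_DIRECT and cmp it back. Reduced to a single device as a sketch:

    sock=/var/tmp/spdk-nbd.sock
    ./scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096      # 64 MiB bdev, 4 KiB blocks -> "Malloc0"
    ./scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256    # 1 MiB of random data
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                     # byte-for-byte verify
    ./scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
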
00:04:58.242 "bdev_name": "Malloc1" 00:04:58.242 } 00:04:58.242 ]' 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.242 { 00:04:58.242 "nbd_device": "/dev/nbd0", 00:04:58.242 "bdev_name": "Malloc0" 00:04:58.242 }, 00:04:58.242 { 00:04:58.242 "nbd_device": "/dev/nbd1", 00:04:58.242 "bdev_name": "Malloc1" 00:04:58.242 } 00:04:58.242 ]' 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.242 /dev/nbd1' 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.242 /dev/nbd1' 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.242 256+0 records in 00:04:58.242 256+0 records out 00:04:58.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498576 s, 210 MB/s 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.242 256+0 records in 00:04:58.242 256+0 records out 00:04:58.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241745 s, 43.4 MB/s 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.242 256+0 records in 00:04:58.242 256+0 records out 00:04:58.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253611 s, 41.3 MB/s 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@51 -- # local i 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.242 19:35:39 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.500 19:35:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:58.500 19:35:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:58.500 19:35:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:58.500 19:35:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.500 19:35:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.500 19:35:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:58.500 19:35:39 -- bdev/nbd_common.sh@41 -- # break 00:04:58.500 19:35:39 -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.500 19:35:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.500 19:35:39 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.758 19:35:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.758 19:35:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.758 19:35:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.758 19:35:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.758 19:35:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.758 19:35:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.758 19:35:40 -- bdev/nbd_common.sh@41 -- # break 00:04:58.758 19:35:40 -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.759 19:35:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.759 19:35:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.759 19:35:40 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.022 19:35:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.022 19:35:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.022 19:35:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.022 19:35:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.022 19:35:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.022 19:35:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.022 19:35:40 -- bdev/nbd_common.sh@65 -- # true 00:04:59.022 19:35:40 -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.022 19:35:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.022 19:35:40 -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.022 19:35:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.022 19:35:40 -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.022 19:35:40 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.280 19:35:40 -- event/event.sh@35 -- # 
sleep 3 00:04:59.540 [2024-04-24 19:35:41.036098] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.798 [2024-04-24 19:35:41.151845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.798 [2024-04-24 19:35:41.151845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.798 [2024-04-24 19:35:41.213649] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.798 [2024-04-24 19:35:41.213734] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.336 19:35:43 -- event/event.sh@23 -- # for i in {0..2} 00:05:02.336 19:35:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:02.336 spdk_app_start Round 1 00:05:02.336 19:35:43 -- event/event.sh@25 -- # waitforlisten 1585045 /var/tmp/spdk-nbd.sock 00:05:02.336 19:35:43 -- common/autotest_common.sh@817 -- # '[' -z 1585045 ']' 00:05:02.336 19:35:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.336 19:35:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:02.336 19:35:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:02.336 19:35:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:02.336 19:35:43 -- common/autotest_common.sh@10 -- # set +x 00:05:02.595 19:35:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:02.595 19:35:44 -- common/autotest_common.sh@850 -- # return 0 00:05:02.595 19:35:44 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.853 Malloc0 00:05:02.853 19:35:44 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.112 Malloc1 00:05:03.112 19:35:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@12 -- # local i 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.112 19:35:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.370 /dev/nbd0 00:05:03.370 19:35:44 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.370 19:35:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.370 19:35:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:03.370 19:35:44 -- common/autotest_common.sh@855 -- # local i 00:05:03.370 19:35:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:03.370 19:35:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:03.370 19:35:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:03.370 19:35:44 -- common/autotest_common.sh@859 -- # break 00:05:03.370 19:35:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:03.370 19:35:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:03.370 19:35:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.370 1+0 records in 00:05:03.370 1+0 records out 00:05:03.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199424 s, 20.5 MB/s 00:05:03.370 19:35:44 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.370 19:35:44 -- common/autotest_common.sh@872 -- # size=4096 00:05:03.370 19:35:44 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.370 19:35:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:03.370 19:35:44 -- common/autotest_common.sh@875 -- # return 0 00:05:03.370 19:35:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.370 19:35:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.370 19:35:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.629 /dev/nbd1 00:05:03.629 19:35:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.629 19:35:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.629 19:35:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:03.629 19:35:45 -- common/autotest_common.sh@855 -- # local i 00:05:03.629 19:35:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:03.629 19:35:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:03.629 19:35:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:03.629 19:35:45 -- common/autotest_common.sh@859 -- # break 00:05:03.629 19:35:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:03.629 19:35:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:03.629 19:35:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.629 1+0 records in 00:05:03.629 1+0 records out 00:05:03.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183776 s, 22.3 MB/s 00:05:03.629 19:35:45 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.629 19:35:45 -- common/autotest_common.sh@872 -- # size=4096 00:05:03.629 19:35:45 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.629 19:35:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:03.629 19:35:45 -- common/autotest_common.sh@875 -- # return 0 00:05:03.629 19:35:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.629 19:35:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.629 19:35:45 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.629 19:35:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.629 19:35:45 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.887 19:35:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:03.887 { 00:05:03.887 "nbd_device": "/dev/nbd0", 00:05:03.887 "bdev_name": "Malloc0" 00:05:03.887 }, 00:05:03.887 { 00:05:03.887 "nbd_device": "/dev/nbd1", 00:05:03.887 "bdev_name": "Malloc1" 00:05:03.887 } 00:05:03.887 ]' 00:05:03.887 19:35:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.887 { 00:05:03.888 "nbd_device": "/dev/nbd0", 00:05:03.888 "bdev_name": "Malloc0" 00:05:03.888 }, 00:05:03.888 { 00:05:03.888 "nbd_device": "/dev/nbd1", 00:05:03.888 "bdev_name": "Malloc1" 00:05:03.888 } 00:05:03.888 ]' 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.888 /dev/nbd1' 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.888 /dev/nbd1' 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.888 256+0 records in 00:05:03.888 256+0 records out 00:05:03.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051499 s, 204 MB/s 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.888 19:35:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.146 256+0 records in 00:05:04.146 256+0 records out 00:05:04.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242486 s, 43.2 MB/s 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.146 256+0 records in 00:05:04.146 256+0 records out 00:05:04.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250953 s, 41.8 MB/s 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@51 -- # local i 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.146 19:35:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.404 19:35:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.404 19:35:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.404 19:35:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.404 19:35:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.404 19:35:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.404 19:35:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.404 19:35:45 -- bdev/nbd_common.sh@41 -- # break 00:05:04.404 19:35:45 -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.404 19:35:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.404 19:35:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.661 19:35:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.661 19:35:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:04.661 19:35:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:04.661 19:35:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.661 19:35:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.661 19:35:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.661 19:35:45 -- bdev/nbd_common.sh@41 -- # break 00:05:04.661 19:35:45 -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.661 19:35:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.661 19:35:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.661 19:35:45 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.929 19:35:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.929 19:35:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.929 19:35:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.929 19:35:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.929 19:35:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.929 19:35:46 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:04.929 19:35:46 -- bdev/nbd_common.sh@65 -- # true 00:05:04.929 19:35:46 -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.929 19:35:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.929 19:35:46 -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.929 19:35:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.929 19:35:46 -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.929 19:35:46 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:05.227 19:35:46 -- event/event.sh@35 -- # sleep 3 00:05:05.487 [2024-04-24 19:35:46.819976] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.487 [2024-04-24 19:35:46.944940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.487 [2024-04-24 19:35:46.944945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.745 [2024-04-24 19:35:47.008182] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:05.745 [2024-04-24 19:35:47.008259] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:08.281 19:35:49 -- event/event.sh@23 -- # for i in {0..2} 00:05:08.281 19:35:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:08.281 spdk_app_start Round 2 00:05:08.281 19:35:49 -- event/event.sh@25 -- # waitforlisten 1585045 /var/tmp/spdk-nbd.sock 00:05:08.281 19:35:49 -- common/autotest_common.sh@817 -- # '[' -z 1585045 ']' 00:05:08.281 19:35:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.281 19:35:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:08.281 19:35:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:08.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
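The waitforlisten calls traced before each round all follow the same polling idiom: print the banner above, silence xtrace, then retry a harmless RPC against the UNIX socket until the target answers or the retry budget (max_retries=100 in this trace) is exhausted. A minimal sketch of that pattern, assuming scripts/rpc.py with the rpc_get_methods call as the liveness probe; this is a simplification, not the exact autotest_common.sh body:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 1; i <= max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1                    # target died while we were waiting
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
            return 0                                              # socket is up and answering RPCs
        fi
        sleep 0.1
    done
    return 1                                                      # never came up within the budget
}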
00:05:08.281 19:35:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:08.281 19:35:49 -- common/autotest_common.sh@10 -- # set +x 00:05:08.540 19:35:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:08.540 19:35:49 -- common/autotest_common.sh@850 -- # return 0 00:05:08.540 19:35:49 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.540 Malloc0 00:05:08.799 19:35:50 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.799 Malloc1 00:05:09.057 19:35:50 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.057 19:35:50 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.057 19:35:50 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.057 19:35:50 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.057 19:35:50 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.057 19:35:50 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.057 19:35:50 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.057 19:35:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.057 19:35:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.057 19:35:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.057 19:35:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.057 19:35:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.057 19:35:50 -- bdev/nbd_common.sh@12 -- # local i 00:05:09.057 19:35:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.057 19:35:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.058 19:35:50 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.058 /dev/nbd0 00:05:09.058 19:35:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.316 19:35:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.316 19:35:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:09.316 19:35:50 -- common/autotest_common.sh@855 -- # local i 00:05:09.316 19:35:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:09.316 19:35:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:09.316 19:35:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:09.316 19:35:50 -- common/autotest_common.sh@859 -- # break 00:05:09.316 19:35:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:09.316 19:35:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:09.316 19:35:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.316 1+0 records in 00:05:09.316 1+0 records out 00:05:09.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208719 s, 19.6 MB/s 00:05:09.316 19:35:50 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.316 19:35:50 -- common/autotest_common.sh@872 -- # size=4096 00:05:09.316 19:35:50 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.316 19:35:50 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:05:09.316 19:35:50 -- common/autotest_common.sh@875 -- # return 0 00:05:09.316 19:35:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.316 19:35:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.316 19:35:50 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.574 /dev/nbd1 00:05:09.574 19:35:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.574 19:35:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.574 19:35:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:09.574 19:35:50 -- common/autotest_common.sh@855 -- # local i 00:05:09.574 19:35:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:09.575 19:35:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:09.575 19:35:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:09.575 19:35:50 -- common/autotest_common.sh@859 -- # break 00:05:09.575 19:35:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:09.575 19:35:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:09.575 19:35:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.575 1+0 records in 00:05:09.575 1+0 records out 00:05:09.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190587 s, 21.5 MB/s 00:05:09.575 19:35:50 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.575 19:35:50 -- common/autotest_common.sh@872 -- # size=4096 00:05:09.575 19:35:50 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.575 19:35:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:09.575 19:35:50 -- common/autotest_common.sh@875 -- # return 0 00:05:09.575 19:35:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.575 19:35:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.575 19:35:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.575 19:35:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.575 19:35:50 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.832 { 00:05:09.832 "nbd_device": "/dev/nbd0", 00:05:09.832 "bdev_name": "Malloc0" 00:05:09.832 }, 00:05:09.832 { 00:05:09.832 "nbd_device": "/dev/nbd1", 00:05:09.832 "bdev_name": "Malloc1" 00:05:09.832 } 00:05:09.832 ]' 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.832 { 00:05:09.832 "nbd_device": "/dev/nbd0", 00:05:09.832 "bdev_name": "Malloc0" 00:05:09.832 }, 00:05:09.832 { 00:05:09.832 "nbd_device": "/dev/nbd1", 00:05:09.832 "bdev_name": "Malloc1" 00:05:09.832 } 00:05:09.832 ]' 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.832 /dev/nbd1' 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.832 /dev/nbd1' 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.832 19:35:51 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.832 256+0 records in 00:05:09.832 256+0 records out 00:05:09.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00429308 s, 244 MB/s 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.832 256+0 records in 00:05:09.832 256+0 records out 00:05:09.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222407 s, 47.1 MB/s 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.832 256+0 records in 00:05:09.832 256+0 records out 00:05:09.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257767 s, 40.7 MB/s 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@51 -- # local i 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.832 19:35:51 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.090 19:35:51 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.090 19:35:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.090 19:35:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.090 19:35:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.090 19:35:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.090 19:35:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.090 19:35:51 -- bdev/nbd_common.sh@41 -- # break 00:05:10.090 19:35:51 -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.090 19:35:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.090 19:35:51 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.347 19:35:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.347 19:35:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.347 19:35:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.347 19:35:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.347 19:35:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.347 19:35:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.347 19:35:51 -- bdev/nbd_common.sh@41 -- # break 00:05:10.347 19:35:51 -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.347 19:35:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.347 19:35:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.347 19:35:51 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.605 19:35:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.605 19:35:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.605 19:35:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.605 19:35:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.605 19:35:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.605 19:35:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.605 19:35:52 -- bdev/nbd_common.sh@65 -- # true 00:05:10.605 19:35:52 -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.605 19:35:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.605 19:35:52 -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.605 19:35:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.605 19:35:52 -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.605 19:35:52 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.863 19:35:52 -- event/event.sh@35 -- # sleep 3 00:05:11.121 [2024-04-24 19:35:52.593366] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.379 [2024-04-24 19:35:52.708767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.379 [2024-04-24 19:35:52.708767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.379 [2024-04-24 19:35:52.771404] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.379 [2024-04-24 19:35:52.771480] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
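Every app_repeat round above runs the identical nbd_dd_data_verify sequence: fill a scratch file from /dev/urandom, dd it onto each exported /dev/nbd* device with oflag=direct, then cmp each device back against the file before stopping the disks and deleting it. Condensed into one loop (the 4096x256 = 1 MiB geometry and the cmp -b -n 1M flags are taken straight from the trace; the function name and scratch path are just labels):

nbd_write_verify() {
    local tmp_file=/tmp/nbdrandtest nbd_list=(/dev/nbd0 /dev/nbd1) dev
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 || return 1              # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct || return 1   # write through the nbd device
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || return 1                              # byte-for-byte read-back check
    done
    rm -f "$tmp_file"
}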
00:05:13.909 19:35:55 -- event/event.sh@38 -- # waitforlisten 1585045 /var/tmp/spdk-nbd.sock 00:05:13.909 19:35:55 -- common/autotest_common.sh@817 -- # '[' -z 1585045 ']' 00:05:13.909 19:35:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.909 19:35:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:13.909 19:35:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.909 19:35:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:13.909 19:35:55 -- common/autotest_common.sh@10 -- # set +x 00:05:14.167 19:35:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:14.167 19:35:55 -- common/autotest_common.sh@850 -- # return 0 00:05:14.167 19:35:55 -- event/event.sh@39 -- # killprocess 1585045 00:05:14.167 19:35:55 -- common/autotest_common.sh@936 -- # '[' -z 1585045 ']' 00:05:14.167 19:35:55 -- common/autotest_common.sh@940 -- # kill -0 1585045 00:05:14.167 19:35:55 -- common/autotest_common.sh@941 -- # uname 00:05:14.167 19:35:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:14.167 19:35:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1585045 00:05:14.167 19:35:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:14.167 19:35:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:14.167 19:35:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1585045' 00:05:14.167 killing process with pid 1585045 00:05:14.167 19:35:55 -- common/autotest_common.sh@955 -- # kill 1585045 00:05:14.167 19:35:55 -- common/autotest_common.sh@960 -- # wait 1585045 00:05:14.425 spdk_app_start is called in Round 0. 00:05:14.425 Shutdown signal received, stop current app iteration 00:05:14.425 Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 reinitialization... 00:05:14.425 spdk_app_start is called in Round 1. 00:05:14.425 Shutdown signal received, stop current app iteration 00:05:14.425 Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 reinitialization... 00:05:14.425 spdk_app_start is called in Round 2. 00:05:14.425 Shutdown signal received, stop current app iteration 00:05:14.425 Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 reinitialization... 00:05:14.425 spdk_app_start is called in Round 3. 
00:05:14.425 Shutdown signal received, stop current app iteration 00:05:14.425 19:35:55 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:14.425 19:35:55 -- event/event.sh@42 -- # return 0 00:05:14.425 00:05:14.425 real 0m17.927s 00:05:14.425 user 0m38.683s 00:05:14.425 sys 0m3.193s 00:05:14.425 19:35:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.425 19:35:55 -- common/autotest_common.sh@10 -- # set +x 00:05:14.425 ************************************ 00:05:14.425 END TEST app_repeat 00:05:14.425 ************************************ 00:05:14.425 19:35:55 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:14.425 19:35:55 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:14.425 19:35:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.425 19:35:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.425 19:35:55 -- common/autotest_common.sh@10 -- # set +x 00:05:14.682 ************************************ 00:05:14.682 START TEST cpu_locks 00:05:14.682 ************************************ 00:05:14.682 19:35:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:14.682 * Looking for test storage... 00:05:14.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:14.683 19:35:56 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:14.683 19:35:56 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:14.683 19:35:56 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:14.683 19:35:56 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:14.683 19:35:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.683 19:35:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.683 19:35:56 -- common/autotest_common.sh@10 -- # set +x 00:05:14.683 ************************************ 00:05:14.683 START TEST default_locks 00:05:14.683 ************************************ 00:05:14.683 19:35:56 -- common/autotest_common.sh@1111 -- # default_locks 00:05:14.683 19:35:56 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1587414 00:05:14.683 19:35:56 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.683 19:35:56 -- event/cpu_locks.sh@47 -- # waitforlisten 1587414 00:05:14.683 19:35:56 -- common/autotest_common.sh@817 -- # '[' -z 1587414 ']' 00:05:14.683 19:35:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.683 19:35:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:14.683 19:35:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.683 19:35:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:14.683 19:35:56 -- common/autotest_common.sh@10 -- # set +x 00:05:14.683 [2024-04-24 19:35:56.156325] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:05:14.683 [2024-04-24 19:35:56.156400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587414 ] 00:05:14.683 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.941 [2024-04-24 19:35:56.219622] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.941 [2024-04-24 19:35:56.327599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.199 19:35:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:15.199 19:35:56 -- common/autotest_common.sh@850 -- # return 0 00:05:15.199 19:35:56 -- event/cpu_locks.sh@49 -- # locks_exist 1587414 00:05:15.199 19:35:56 -- event/cpu_locks.sh@22 -- # lslocks -p 1587414 00:05:15.199 19:35:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.456 lslocks: write error 00:05:15.456 19:35:56 -- event/cpu_locks.sh@50 -- # killprocess 1587414 00:05:15.456 19:35:56 -- common/autotest_common.sh@936 -- # '[' -z 1587414 ']' 00:05:15.456 19:35:56 -- common/autotest_common.sh@940 -- # kill -0 1587414 00:05:15.456 19:35:56 -- common/autotest_common.sh@941 -- # uname 00:05:15.456 19:35:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:15.456 19:35:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1587414 00:05:15.456 19:35:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:15.456 19:35:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:15.456 19:35:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1587414' 00:05:15.456 killing process with pid 1587414 00:05:15.456 19:35:56 -- common/autotest_common.sh@955 -- # kill 1587414 00:05:15.456 19:35:56 -- common/autotest_common.sh@960 -- # wait 1587414 00:05:16.021 19:35:57 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1587414 00:05:16.021 19:35:57 -- common/autotest_common.sh@638 -- # local es=0 00:05:16.021 19:35:57 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1587414 00:05:16.021 19:35:57 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:16.021 19:35:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:16.021 19:35:57 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:16.021 19:35:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:16.021 19:35:57 -- common/autotest_common.sh@641 -- # waitforlisten 1587414 00:05:16.021 19:35:57 -- common/autotest_common.sh@817 -- # '[' -z 1587414 ']' 00:05:16.021 19:35:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.021 19:35:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:16.021 19:35:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
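The "killing process with pid 1587414" sequence a few entries up is the killprocess helper: confirm the PID is still alive with kill -0, read its command name with ps --no-headers -o comm= to make sure it is an SPDK reactor rather than a sudo wrapper, then signal it and wait for it to be reaped. A simplified reconstruction under those same checks (the real helper's sudo special-casing is reduced to a bail-out here):

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1                 # must still be running
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1                 # refuse to kill a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                             # SIGTERM, then reap the child
}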
00:05:16.021 19:35:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:16.021 19:35:57 -- common/autotest_common.sh@10 -- # set +x 00:05:16.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1587414) - No such process 00:05:16.021 ERROR: process (pid: 1587414) is no longer running 00:05:16.021 19:35:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:16.021 19:35:57 -- common/autotest_common.sh@850 -- # return 1 00:05:16.021 19:35:57 -- common/autotest_common.sh@641 -- # es=1 00:05:16.021 19:35:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:16.021 19:35:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:16.021 19:35:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:16.021 19:35:57 -- event/cpu_locks.sh@54 -- # no_locks 00:05:16.021 19:35:57 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:16.021 19:35:57 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:16.021 19:35:57 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:16.021 00:05:16.021 real 0m1.299s 00:05:16.021 user 0m1.213s 00:05:16.021 sys 0m0.517s 00:05:16.021 19:35:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.021 19:35:57 -- common/autotest_common.sh@10 -- # set +x 00:05:16.021 ************************************ 00:05:16.021 END TEST default_locks 00:05:16.021 ************************************ 00:05:16.021 19:35:57 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:16.021 19:35:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.021 19:35:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.021 19:35:57 -- common/autotest_common.sh@10 -- # set +x 00:05:16.021 ************************************ 00:05:16.021 START TEST default_locks_via_rpc 00:05:16.021 ************************************ 00:05:16.021 19:35:57 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:16.021 19:35:57 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1587699 00:05:16.021 19:35:57 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.021 19:35:57 -- event/cpu_locks.sh@63 -- # waitforlisten 1587699 00:05:16.021 19:35:57 -- common/autotest_common.sh@817 -- # '[' -z 1587699 ']' 00:05:16.021 19:35:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.021 19:35:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:16.021 19:35:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.021 19:35:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:16.021 19:35:57 -- common/autotest_common.sh@10 -- # set +x 00:05:16.279 [2024-04-24 19:35:57.578346] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:05:16.279 [2024-04-24 19:35:57.578433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587699 ] 00:05:16.279 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.279 [2024-04-24 19:35:57.635001] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.279 [2024-04-24 19:35:57.744536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.538 19:35:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:16.538 19:35:58 -- common/autotest_common.sh@850 -- # return 0 00:05:16.538 19:35:58 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:16.538 19:35:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.538 19:35:58 -- common/autotest_common.sh@10 -- # set +x 00:05:16.538 19:35:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.538 19:35:58 -- event/cpu_locks.sh@67 -- # no_locks 00:05:16.538 19:35:58 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:16.538 19:35:58 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:16.538 19:35:58 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:16.538 19:35:58 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:16.538 19:35:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.538 19:35:58 -- common/autotest_common.sh@10 -- # set +x 00:05:16.538 19:35:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.538 19:35:58 -- event/cpu_locks.sh@71 -- # locks_exist 1587699 00:05:16.538 19:35:58 -- event/cpu_locks.sh@22 -- # lslocks -p 1587699 00:05:16.538 19:35:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.796 19:35:58 -- event/cpu_locks.sh@73 -- # killprocess 1587699 00:05:16.796 19:35:58 -- common/autotest_common.sh@936 -- # '[' -z 1587699 ']' 00:05:16.796 19:35:58 -- common/autotest_common.sh@940 -- # kill -0 1587699 00:05:16.796 19:35:58 -- common/autotest_common.sh@941 -- # uname 00:05:16.796 19:35:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:16.796 19:35:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1587699 00:05:16.796 19:35:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:16.796 19:35:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:16.796 19:35:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1587699' 00:05:16.796 killing process with pid 1587699 00:05:16.796 19:35:58 -- common/autotest_common.sh@955 -- # kill 1587699 00:05:16.796 19:35:58 -- common/autotest_common.sh@960 -- # wait 1587699 00:05:17.362 00:05:17.362 real 0m1.198s 00:05:17.362 user 0m1.124s 00:05:17.362 sys 0m0.513s 00:05:17.362 19:35:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.362 19:35:58 -- common/autotest_common.sh@10 -- # set +x 00:05:17.362 ************************************ 00:05:17.362 END TEST default_locks_via_rpc 00:05:17.362 ************************************ 00:05:17.362 19:35:58 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:17.362 19:35:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.362 19:35:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.362 19:35:58 -- common/autotest_common.sh@10 -- # set +x 00:05:17.362 ************************************ 00:05:17.362 START TEST non_locking_app_on_locked_coremask 
00:05:17.362 ************************************ 00:05:17.362 19:35:58 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:17.362 19:35:58 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1587871 00:05:17.362 19:35:58 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.362 19:35:58 -- event/cpu_locks.sh@81 -- # waitforlisten 1587871 /var/tmp/spdk.sock 00:05:17.362 19:35:58 -- common/autotest_common.sh@817 -- # '[' -z 1587871 ']' 00:05:17.362 19:35:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.362 19:35:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:17.362 19:35:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.362 19:35:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:17.362 19:35:58 -- common/autotest_common.sh@10 -- # set +x 00:05:17.620 [2024-04-24 19:35:58.893541] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:17.620 [2024-04-24 19:35:58.893651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587871 ] 00:05:17.620 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.620 [2024-04-24 19:35:58.955205] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.620 [2024-04-24 19:35:59.075396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.878 19:35:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:17.878 19:35:59 -- common/autotest_common.sh@850 -- # return 0 00:05:17.878 19:35:59 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1587885 00:05:17.878 19:35:59 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:17.878 19:35:59 -- event/cpu_locks.sh@85 -- # waitforlisten 1587885 /var/tmp/spdk2.sock 00:05:17.878 19:35:59 -- common/autotest_common.sh@817 -- # '[' -z 1587885 ']' 00:05:17.878 19:35:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.878 19:35:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:17.878 19:35:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.878 19:35:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:17.878 19:35:59 -- common/autotest_common.sh@10 -- # set +x 00:05:17.878 [2024-04-24 19:35:59.380295] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:17.878 [2024-04-24 19:35:59.380370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587885 ] 00:05:18.136 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.136 [2024-04-24 19:35:59.477800] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
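The "CPU core locks deactivated" notice is the whole point of this test: the first spdk_tgt (-m 0x1) takes a per-core file lock on core 0, so a second instance can only share that core if it is started with --disable-cpumask-locks and its own RPC socket, exactly as traced above. The default_locks_via_rpc test earlier toggles the same behavior at runtime through the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs. Schematically (binary path shortened, waitforlisten as sketched earlier):

build/bin/spdk_tgt -m 0x1 &                                                   # acquires the core-0 lock
waitforlisten $! /var/tmp/spdk.sock
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &    # skips lock acquisition
waitforlisten $! /var/tmp/spdk2.sock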
00:05:18.136 [2024-04-24 19:35:59.477833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.395 [2024-04-24 19:35:59.712207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.964 19:36:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:18.964 19:36:00 -- common/autotest_common.sh@850 -- # return 0 00:05:18.964 19:36:00 -- event/cpu_locks.sh@87 -- # locks_exist 1587871 00:05:18.964 19:36:00 -- event/cpu_locks.sh@22 -- # lslocks -p 1587871 00:05:18.964 19:36:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.531 lslocks: write error 00:05:19.531 19:36:00 -- event/cpu_locks.sh@89 -- # killprocess 1587871 00:05:19.531 19:36:00 -- common/autotest_common.sh@936 -- # '[' -z 1587871 ']' 00:05:19.531 19:36:00 -- common/autotest_common.sh@940 -- # kill -0 1587871 00:05:19.531 19:36:00 -- common/autotest_common.sh@941 -- # uname 00:05:19.531 19:36:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:19.531 19:36:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1587871 00:05:19.531 19:36:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:19.531 19:36:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:19.531 19:36:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1587871' 00:05:19.531 killing process with pid 1587871 00:05:19.531 19:36:00 -- common/autotest_common.sh@955 -- # kill 1587871 00:05:19.531 19:36:00 -- common/autotest_common.sh@960 -- # wait 1587871 00:05:20.465 19:36:01 -- event/cpu_locks.sh@90 -- # killprocess 1587885 00:05:20.465 19:36:01 -- common/autotest_common.sh@936 -- # '[' -z 1587885 ']' 00:05:20.465 19:36:01 -- common/autotest_common.sh@940 -- # kill -0 1587885 00:05:20.465 19:36:01 -- common/autotest_common.sh@941 -- # uname 00:05:20.465 19:36:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:20.465 19:36:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1587885 00:05:20.465 19:36:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:20.465 19:36:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:20.465 19:36:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1587885' 00:05:20.465 killing process with pid 1587885 00:05:20.465 19:36:01 -- common/autotest_common.sh@955 -- # kill 1587885 00:05:20.465 19:36:01 -- common/autotest_common.sh@960 -- # wait 1587885 00:05:20.723 00:05:20.723 real 0m3.293s 00:05:20.723 user 0m3.465s 00:05:20.723 sys 0m1.030s 00:05:20.723 19:36:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.723 19:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:20.723 ************************************ 00:05:20.723 END TEST non_locking_app_on_locked_coremask 00:05:20.723 ************************************ 00:05:20.723 19:36:02 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:20.723 19:36:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.723 19:36:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.723 19:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:20.981 ************************************ 00:05:20.981 START TEST locking_app_on_unlocked_coremask 00:05:20.981 ************************************ 00:05:20.981 19:36:02 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:20.981 19:36:02 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1588425 00:05:20.981 19:36:02 -- 
event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:20.981 19:36:02 -- event/cpu_locks.sh@99 -- # waitforlisten 1588425 /var/tmp/spdk.sock 00:05:20.981 19:36:02 -- common/autotest_common.sh@817 -- # '[' -z 1588425 ']' 00:05:20.981 19:36:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.981 19:36:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:20.981 19:36:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.981 19:36:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:20.981 19:36:02 -- common/autotest_common.sh@10 -- # set +x 00:05:20.981 [2024-04-24 19:36:02.307641] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:20.981 [2024-04-24 19:36:02.307758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588425 ] 00:05:20.981 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.982 [2024-04-24 19:36:02.371155] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:20.982 [2024-04-24 19:36:02.371202] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.982 [2024-04-24 19:36:02.485087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.916 19:36:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:21.916 19:36:03 -- common/autotest_common.sh@850 -- # return 0 00:05:21.916 19:36:03 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1588563 00:05:21.916 19:36:03 -- event/cpu_locks.sh@103 -- # waitforlisten 1588563 /var/tmp/spdk2.sock 00:05:21.916 19:36:03 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:21.916 19:36:03 -- common/autotest_common.sh@817 -- # '[' -z 1588563 ']' 00:05:21.916 19:36:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.916 19:36:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:21.916 19:36:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.916 19:36:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:21.916 19:36:03 -- common/autotest_common.sh@10 -- # set +x 00:05:21.916 [2024-04-24 19:36:03.306533] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:05:21.916 [2024-04-24 19:36:03.306644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588563 ] 00:05:21.916 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.916 [2024-04-24 19:36:03.403742] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.173 [2024-04-24 19:36:03.643514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.739 19:36:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:22.739 19:36:04 -- common/autotest_common.sh@850 -- # return 0 00:05:22.739 19:36:04 -- event/cpu_locks.sh@105 -- # locks_exist 1588563 00:05:22.739 19:36:04 -- event/cpu_locks.sh@22 -- # lslocks -p 1588563 00:05:22.739 19:36:04 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.305 lslocks: write error 00:05:23.305 19:36:04 -- event/cpu_locks.sh@107 -- # killprocess 1588425 00:05:23.305 19:36:04 -- common/autotest_common.sh@936 -- # '[' -z 1588425 ']' 00:05:23.305 19:36:04 -- common/autotest_common.sh@940 -- # kill -0 1588425 00:05:23.305 19:36:04 -- common/autotest_common.sh@941 -- # uname 00:05:23.305 19:36:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:23.305 19:36:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1588425 00:05:23.305 19:36:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:23.305 19:36:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:23.305 19:36:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1588425' 00:05:23.305 killing process with pid 1588425 00:05:23.305 19:36:04 -- common/autotest_common.sh@955 -- # kill 1588425 00:05:23.305 19:36:04 -- common/autotest_common.sh@960 -- # wait 1588425 00:05:24.237 19:36:05 -- event/cpu_locks.sh@108 -- # killprocess 1588563 00:05:24.237 19:36:05 -- common/autotest_common.sh@936 -- # '[' -z 1588563 ']' 00:05:24.237 19:36:05 -- common/autotest_common.sh@940 -- # kill -0 1588563 00:05:24.237 19:36:05 -- common/autotest_common.sh@941 -- # uname 00:05:24.237 19:36:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.237 19:36:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1588563 00:05:24.237 19:36:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.237 19:36:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.237 19:36:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1588563' 00:05:24.237 killing process with pid 1588563 00:05:24.237 19:36:05 -- common/autotest_common.sh@955 -- # kill 1588563 00:05:24.237 19:36:05 -- common/autotest_common.sh@960 -- # wait 1588563 00:05:24.804 00:05:24.804 real 0m3.811s 00:05:24.804 user 0m4.147s 00:05:24.804 sys 0m1.086s 00:05:24.804 19:36:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:24.804 19:36:06 -- common/autotest_common.sh@10 -- # set +x 00:05:24.804 ************************************ 00:05:24.804 END TEST locking_app_on_unlocked_coremask 00:05:24.804 ************************************ 00:05:24.804 19:36:06 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:24.804 19:36:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.804 19:36:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.804 19:36:06 -- common/autotest_common.sh@10 -- # set +x 00:05:24.804 
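The locks_exist check traced in the runs above is just lslocks piped through grep; the "lslocks: write error" lines are apparently benign, since grep -q exits at the first match and lslocks then takes a broken pipe while still writing. A sketch of the same check, where the pid lookup is an illustrative stand-in rather than the harness's method:

    pid=$(pgrep -f spdk_tgt | head -n1)   # hypothetical way to pick the target's pid
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds at least one CPU core lock file"
    fi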
************************************ 00:05:24.804 START TEST locking_app_on_locked_coremask 00:05:24.804 ************************************ 00:05:24.804 19:36:06 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:24.804 19:36:06 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1588881 00:05:24.804 19:36:06 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.804 19:36:06 -- event/cpu_locks.sh@116 -- # waitforlisten 1588881 /var/tmp/spdk.sock 00:05:24.804 19:36:06 -- common/autotest_common.sh@817 -- # '[' -z 1588881 ']' 00:05:24.804 19:36:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.804 19:36:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:24.804 19:36:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.804 19:36:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:24.804 19:36:06 -- common/autotest_common.sh@10 -- # set +x 00:05:24.804 [2024-04-24 19:36:06.224942] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:24.804 [2024-04-24 19:36:06.225045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588881 ] 00:05:24.804 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.804 [2024-04-24 19:36:06.289389] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.062 [2024-04-24 19:36:06.407530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.321 19:36:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:25.321 19:36:06 -- common/autotest_common.sh@850 -- # return 0 00:05:25.321 19:36:06 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1589005 00:05:25.321 19:36:06 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:25.321 19:36:06 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1589005 /var/tmp/spdk2.sock 00:05:25.321 19:36:06 -- common/autotest_common.sh@638 -- # local es=0 00:05:25.321 19:36:06 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1589005 /var/tmp/spdk2.sock 00:05:25.321 19:36:06 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:25.321 19:36:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:25.321 19:36:06 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:25.321 19:36:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:25.321 19:36:06 -- common/autotest_common.sh@641 -- # waitforlisten 1589005 /var/tmp/spdk2.sock 00:05:25.321 19:36:06 -- common/autotest_common.sh@817 -- # '[' -z 1589005 ']' 00:05:25.321 19:36:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.321 19:36:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:25.321 19:36:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
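This test launches the second target through the harness's NOT wrapper because startup is expected to fail: core 0 is already claimed by pid 1588881, as the claim error a few lines below confirms. A simplified stand-in for that wrapper (not the harness's actual implementation, which also validates its argument) is plain exit-status inversion:

    NOT() {
        # succeed only if the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }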
00:05:25.321 19:36:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:25.321 19:36:06 -- common/autotest_common.sh@10 -- # set +x 00:05:25.321 [2024-04-24 19:36:06.728988] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:25.321 [2024-04-24 19:36:06.729087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589005 ] 00:05:25.321 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.321 [2024-04-24 19:36:06.824536] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1588881 has claimed it. 00:05:25.321 [2024-04-24 19:36:06.824609] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:26.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1589005) - No such process 00:05:26.254 ERROR: process (pid: 1589005) is no longer running 00:05:26.254 19:36:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:26.254 19:36:07 -- common/autotest_common.sh@850 -- # return 1 00:05:26.254 19:36:07 -- common/autotest_common.sh@641 -- # es=1 00:05:26.254 19:36:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:26.254 19:36:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:26.254 19:36:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:26.254 19:36:07 -- event/cpu_locks.sh@122 -- # locks_exist 1588881 00:05:26.254 19:36:07 -- event/cpu_locks.sh@22 -- # lslocks -p 1588881 00:05:26.254 19:36:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.254 lslocks: write error 00:05:26.254 19:36:07 -- event/cpu_locks.sh@124 -- # killprocess 1588881 00:05:26.254 19:36:07 -- common/autotest_common.sh@936 -- # '[' -z 1588881 ']' 00:05:26.254 19:36:07 -- common/autotest_common.sh@940 -- # kill -0 1588881 00:05:26.254 19:36:07 -- common/autotest_common.sh@941 -- # uname 00:05:26.254 19:36:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.254 19:36:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1588881 00:05:26.254 19:36:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:26.254 19:36:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:26.254 19:36:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1588881' 00:05:26.254 killing process with pid 1588881 00:05:26.254 19:36:07 -- common/autotest_common.sh@955 -- # kill 1588881 00:05:26.254 19:36:07 -- common/autotest_common.sh@960 -- # wait 1588881 00:05:26.820 00:05:26.820 real 0m1.988s 00:05:26.820 user 0m2.125s 00:05:26.820 sys 0m0.639s 00:05:26.820 19:36:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:26.820 19:36:08 -- common/autotest_common.sh@10 -- # set +x 00:05:26.820 ************************************ 00:05:26.820 END TEST locking_app_on_locked_coremask 00:05:26.820 ************************************ 00:05:26.820 19:36:08 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:26.820 19:36:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.820 19:36:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.820 19:36:08 -- common/autotest_common.sh@10 -- # set +x 00:05:26.820 ************************************ 00:05:26.820 START TEST locking_overlapped_coremask 00:05:26.820 
************************************ 00:05:26.820 19:36:08 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:26.820 19:36:08 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1589541 00:05:26.820 19:36:08 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:26.820 19:36:08 -- event/cpu_locks.sh@133 -- # waitforlisten 1589541 /var/tmp/spdk.sock 00:05:26.820 19:36:08 -- common/autotest_common.sh@817 -- # '[' -z 1589541 ']' 00:05:26.820 19:36:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.820 19:36:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:26.820 19:36:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.820 19:36:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:26.820 19:36:08 -- common/autotest_common.sh@10 -- # set +x 00:05:27.079 [2024-04-24 19:36:08.339017] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:27.079 [2024-04-24 19:36:08.339123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589541 ] 00:05:27.079 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.079 [2024-04-24 19:36:08.403462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.079 [2024-04-24 19:36:08.517185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.079 [2024-04-24 19:36:08.517301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.079 [2024-04-24 19:36:08.517304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.014 19:36:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:28.014 19:36:09 -- common/autotest_common.sh@850 -- # return 0 00:05:28.014 19:36:09 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1589820 00:05:28.014 19:36:09 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:28.014 19:36:09 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1589820 /var/tmp/spdk2.sock 00:05:28.014 19:36:09 -- common/autotest_common.sh@638 -- # local es=0 00:05:28.014 19:36:09 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1589820 /var/tmp/spdk2.sock 00:05:28.014 19:36:09 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:28.014 19:36:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:28.014 19:36:09 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:28.014 19:36:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:28.014 19:36:09 -- common/autotest_common.sh@641 -- # waitforlisten 1589820 /var/tmp/spdk2.sock 00:05:28.014 19:36:09 -- common/autotest_common.sh@817 -- # '[' -z 1589820 ']' 00:05:28.014 19:36:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.014 19:36:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:28.014 19:36:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:28.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.014 19:36:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:28.014 19:36:09 -- common/autotest_common.sh@10 -- # set +x 00:05:28.014 [2024-04-24 19:36:09.321370] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:28.014 [2024-04-24 19:36:09.321471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589820 ] 00:05:28.014 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.014 [2024-04-24 19:36:09.411451] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1589541 has claimed it. 00:05:28.014 [2024-04-24 19:36:09.411519] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:28.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1589820) - No such process 00:05:28.580 ERROR: process (pid: 1589820) is no longer running 00:05:28.580 19:36:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:28.580 19:36:10 -- common/autotest_common.sh@850 -- # return 1 00:05:28.580 19:36:10 -- common/autotest_common.sh@641 -- # es=1 00:05:28.580 19:36:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:28.580 19:36:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:28.580 19:36:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:28.580 19:36:10 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:28.580 19:36:10 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:28.580 19:36:10 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:28.580 19:36:10 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:28.580 19:36:10 -- event/cpu_locks.sh@141 -- # killprocess 1589541 00:05:28.580 19:36:10 -- common/autotest_common.sh@936 -- # '[' -z 1589541 ']' 00:05:28.580 19:36:10 -- common/autotest_common.sh@940 -- # kill -0 1589541 00:05:28.580 19:36:10 -- common/autotest_common.sh@941 -- # uname 00:05:28.580 19:36:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.580 19:36:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1589541 00:05:28.580 19:36:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.580 19:36:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.580 19:36:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1589541' 00:05:28.580 killing process with pid 1589541 00:05:28.580 19:36:10 -- common/autotest_common.sh@955 -- # kill 1589541 00:05:28.580 19:36:10 -- common/autotest_common.sh@960 -- # wait 1589541 00:05:29.146 00:05:29.146 real 0m2.219s 00:05:29.146 user 0m6.233s 00:05:29.146 sys 0m0.462s 00:05:29.146 19:36:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.146 19:36:10 -- common/autotest_common.sh@10 -- # set +x 00:05:29.146 ************************************ 00:05:29.146 END TEST locking_overlapped_coremask 00:05:29.146 ************************************ 00:05:29.146 19:36:10 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:29.146 19:36:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.146 19:36:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.146 19:36:10 -- common/autotest_common.sh@10 -- # set +x 00:05:29.146 ************************************ 00:05:29.146 START TEST locking_overlapped_coremask_via_rpc 00:05:29.146 ************************************ 00:05:29.146 19:36:10 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:29.146 19:36:10 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1589995 00:05:29.146 19:36:10 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:29.146 19:36:10 -- event/cpu_locks.sh@149 -- # waitforlisten 1589995 /var/tmp/spdk.sock 00:05:29.146 19:36:10 -- common/autotest_common.sh@817 -- # '[' -z 1589995 ']' 00:05:29.146 19:36:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.146 19:36:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:29.146 19:36:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.146 19:36:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:29.146 19:36:10 -- common/autotest_common.sh@10 -- # set +x 00:05:29.405 [2024-04-24 19:36:10.683959] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:29.405 [2024-04-24 19:36:10.684058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589995 ] 00:05:29.405 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.405 [2024-04-24 19:36:10.741776] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:29.405 [2024-04-24 19:36:10.741817] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.405 [2024-04-24 19:36:10.857363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.405 [2024-04-24 19:36:10.857437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.405 [2024-04-24 19:36:10.857440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.339 19:36:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:30.339 19:36:11 -- common/autotest_common.sh@850 -- # return 0 00:05:30.339 19:36:11 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1590134 00:05:30.339 19:36:11 -- event/cpu_locks.sh@153 -- # waitforlisten 1590134 /var/tmp/spdk2.sock 00:05:30.339 19:36:11 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:30.339 19:36:11 -- common/autotest_common.sh@817 -- # '[' -z 1590134 ']' 00:05:30.339 19:36:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.339 19:36:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:30.339 19:36:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:30.339 19:36:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:30.339 19:36:11 -- common/autotest_common.sh@10 -- # set +x 00:05:30.339 [2024-04-24 19:36:11.658438] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:30.339 [2024-04-24 19:36:11.658532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590134 ] 00:05:30.339 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.339 [2024-04-24 19:36:11.747425] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:30.339 [2024-04-24 19:36:11.747479] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.597 [2024-04-24 19:36:11.966132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.597 [2024-04-24 19:36:11.969689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:30.597 [2024-04-24 19:36:11.969692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.162 19:36:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:31.162 19:36:12 -- common/autotest_common.sh@850 -- # return 0 00:05:31.162 19:36:12 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:31.162 19:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.162 19:36:12 -- common/autotest_common.sh@10 -- # set +x 00:05:31.162 19:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.162 19:36:12 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.162 19:36:12 -- common/autotest_common.sh@638 -- # local es=0 00:05:31.162 19:36:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.162 19:36:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:31.162 19:36:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:31.162 19:36:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:31.162 19:36:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:31.162 19:36:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.162 19:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.162 19:36:12 -- common/autotest_common.sh@10 -- # set +x 00:05:31.162 [2024-04-24 19:36:12.605733] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1589995 has claimed it. 
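The claim failure above is the expected outcome: the first target enabled locks over mask 0x7 (cores 0 through 2), so the second target's attempt over 0x1c (cores 2 through 4) collides on the one core the masks share. The overlap falls straight out of a bitwise AND:

    # intersect the two reactor masks to find the contested core
    printf '0x%x\n' $((0x7 & 0x1c))   # -> 0x4, i.e. core 2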
00:05:31.162 request: 00:05:31.162 { 00:05:31.162 "method": "framework_enable_cpumask_locks", 00:05:31.162 "req_id": 1 00:05:31.162 } 00:05:31.162 Got JSON-RPC error response 00:05:31.162 response: 00:05:31.162 { 00:05:31.162 "code": -32603, 00:05:31.162 "message": "Failed to claim CPU core: 2" 00:05:31.162 } 00:05:31.162 19:36:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:31.162 19:36:12 -- common/autotest_common.sh@641 -- # es=1 00:05:31.162 19:36:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:31.162 19:36:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:31.162 19:36:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:31.162 19:36:12 -- event/cpu_locks.sh@158 -- # waitforlisten 1589995 /var/tmp/spdk.sock 00:05:31.162 19:36:12 -- common/autotest_common.sh@817 -- # '[' -z 1589995 ']' 00:05:31.162 19:36:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.162 19:36:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.162 19:36:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.162 19:36:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.162 19:36:12 -- common/autotest_common.sh@10 -- # set +x 00:05:31.425 19:36:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:31.425 19:36:12 -- common/autotest_common.sh@850 -- # return 0 00:05:31.425 19:36:12 -- event/cpu_locks.sh@159 -- # waitforlisten 1590134 /var/tmp/spdk2.sock 00:05:31.425 19:36:12 -- common/autotest_common.sh@817 -- # '[' -z 1590134 ']' 00:05:31.425 19:36:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.425 19:36:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.425 19:36:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
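The request/response pair above is the raw JSON-RPC exchange behind the harness's rpc_cmd helper. An equivalent direct call, assuming the rpc.py script shipped in the SPDK tree checked out above, would look like this, with -s pointing it at the second target's socket:

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected here: error -32603, "Failed to claim CPU core: 2"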
00:05:31.425 19:36:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.425 19:36:12 -- common/autotest_common.sh@10 -- # set +x 00:05:31.705 19:36:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:31.705 19:36:13 -- common/autotest_common.sh@850 -- # return 0 00:05:31.705 19:36:13 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:31.705 19:36:13 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:31.705 19:36:13 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:31.705 19:36:13 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:31.705 00:05:31.705 real 0m2.472s 00:05:31.705 user 0m1.199s 00:05:31.705 sys 0m0.198s 00:05:31.705 19:36:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.705 19:36:13 -- common/autotest_common.sh@10 -- # set +x 00:05:31.705 ************************************ 00:05:31.705 END TEST locking_overlapped_coremask_via_rpc 00:05:31.705 ************************************ 00:05:31.705 19:36:13 -- event/cpu_locks.sh@174 -- # cleanup 00:05:31.705 19:36:13 -- event/cpu_locks.sh@15 -- # [[ -z 1589995 ]] 00:05:31.705 19:36:13 -- event/cpu_locks.sh@15 -- # killprocess 1589995 00:05:31.705 19:36:13 -- common/autotest_common.sh@936 -- # '[' -z 1589995 ']' 00:05:31.705 19:36:13 -- common/autotest_common.sh@940 -- # kill -0 1589995 00:05:31.705 19:36:13 -- common/autotest_common.sh@941 -- # uname 00:05:31.705 19:36:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.705 19:36:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1589995 00:05:31.706 19:36:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:31.706 19:36:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:31.706 19:36:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1589995' 00:05:31.706 killing process with pid 1589995 00:05:31.706 19:36:13 -- common/autotest_common.sh@955 -- # kill 1589995 00:05:31.706 19:36:13 -- common/autotest_common.sh@960 -- # wait 1589995 00:05:32.270 19:36:13 -- event/cpu_locks.sh@16 -- # [[ -z 1590134 ]] 00:05:32.270 19:36:13 -- event/cpu_locks.sh@16 -- # killprocess 1590134 00:05:32.270 19:36:13 -- common/autotest_common.sh@936 -- # '[' -z 1590134 ']' 00:05:32.270 19:36:13 -- common/autotest_common.sh@940 -- # kill -0 1590134 00:05:32.270 19:36:13 -- common/autotest_common.sh@941 -- # uname 00:05:32.270 19:36:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:32.270 19:36:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1590134 00:05:32.270 19:36:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:32.270 19:36:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:32.270 19:36:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1590134' 00:05:32.270 killing process with pid 1590134 00:05:32.270 19:36:13 -- common/autotest_common.sh@955 -- # kill 1590134 00:05:32.270 19:36:13 -- common/autotest_common.sh@960 -- # wait 1590134 00:05:32.836 19:36:14 -- event/cpu_locks.sh@18 -- # rm -f 00:05:32.836 19:36:14 -- event/cpu_locks.sh@1 -- # cleanup 00:05:32.836 19:36:14 -- event/cpu_locks.sh@15 -- # [[ -z 1589995 ]] 00:05:32.836 19:36:14 -- event/cpu_locks.sh@15 -- # killprocess 1589995 
00:05:32.836 19:36:14 -- common/autotest_common.sh@936 -- # '[' -z 1589995 ']' 00:05:32.836 19:36:14 -- common/autotest_common.sh@940 -- # kill -0 1589995 00:05:32.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1589995) - No such process 00:05:32.836 19:36:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1589995 is not found' 00:05:32.836 Process with pid 1589995 is not found 00:05:32.836 19:36:14 -- event/cpu_locks.sh@16 -- # [[ -z 1590134 ]] 00:05:32.836 19:36:14 -- event/cpu_locks.sh@16 -- # killprocess 1590134 00:05:32.836 19:36:14 -- common/autotest_common.sh@936 -- # '[' -z 1590134 ']' 00:05:32.836 19:36:14 -- common/autotest_common.sh@940 -- # kill -0 1590134 00:05:32.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1590134) - No such process 00:05:32.836 19:36:14 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1590134 is not found' 00:05:32.836 Process with pid 1590134 is not found 00:05:32.836 19:36:14 -- event/cpu_locks.sh@18 -- # rm -f 00:05:32.836 00:05:32.836 real 0m18.132s 00:05:32.836 user 0m32.079s 00:05:32.836 sys 0m5.588s 00:05:32.836 19:36:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:32.836 19:36:14 -- common/autotest_common.sh@10 -- # set +x 00:05:32.836 ************************************ 00:05:32.836 END TEST cpu_locks 00:05:32.836 ************************************ 00:05:32.836 00:05:32.836 real 0m44.711s 00:05:32.836 user 1m23.633s 00:05:32.836 sys 0m9.840s 00:05:32.836 19:36:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:32.836 19:36:14 -- common/autotest_common.sh@10 -- # set +x 00:05:32.836 ************************************ 00:05:32.836 END TEST event 00:05:32.836 ************************************ 00:05:32.836 19:36:14 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:32.836 19:36:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.836 19:36:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.836 19:36:14 -- common/autotest_common.sh@10 -- # set +x 00:05:32.836 ************************************ 00:05:32.836 START TEST thread 00:05:32.836 ************************************ 00:05:32.836 19:36:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:32.836 * Looking for test storage... 00:05:32.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:32.836 19:36:14 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:32.836 19:36:14 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:32.836 19:36:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.836 19:36:14 -- common/autotest_common.sh@10 -- # set +x 00:05:33.094 ************************************ 00:05:33.094 START TEST thread_poller_perf 00:05:33.094 ************************************ 00:05:33.094 19:36:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:33.094 [2024-04-24 19:36:14.392010] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:05:33.094 [2024-04-24 19:36:14.392083] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590641 ] 00:05:33.094 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.094 [2024-04-24 19:36:14.448662] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.094 [2024-04-24 19:36:14.562221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.094 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:34.466 ====================================== 00:05:34.466 busy:2709661717 (cyc) 00:05:34.466 total_run_count: 292000 00:05:34.466 tsc_hz: 2700000000 (cyc) 00:05:34.466 ====================================== 00:05:34.466 poller_cost: 9279 (cyc), 3436 (nsec) 00:05:34.466 00:05:34.466 real 0m1.313s 00:05:34.466 user 0m1.231s 00:05:34.466 sys 0m0.077s 00:05:34.466 19:36:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.466 19:36:15 -- common/autotest_common.sh@10 -- # set +x 00:05:34.466 ************************************ 00:05:34.466 END TEST thread_poller_perf 00:05:34.466 ************************************ 00:05:34.466 19:36:15 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:34.466 19:36:15 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:34.466 19:36:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.466 19:36:15 -- common/autotest_common.sh@10 -- # set +x 00:05:34.466 ************************************ 00:05:34.466 START TEST thread_poller_perf 00:05:34.466 ************************************ 00:05:34.466 19:36:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:34.466 [2024-04-24 19:36:15.831321] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:34.466 [2024-04-24 19:36:15.831382] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590800 ] 00:05:34.466 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.466 [2024-04-24 19:36:15.896136] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.723 [2024-04-24 19:36:16.012928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.723 Running 1000 pollers for 1 seconds with 0 microseconds period. 
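The first summary above reduces to simple arithmetic: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds at the reported tsc_hz of 2.7 GHz. A quick check of the 9279 cyc / 3436 nsec figures:

    awk 'BEGIN {
        cyc = 2709661717 / 292000                     # busy cycles per poller run
        printf "%d cyc  %d nsec\n", cyc, cyc / 2.7    # 2.7 cycles per nanosecond
    }'
    # -> 9279 cyc  3436 nsec, matching the summary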
00:05:35.656 ====================================== 00:05:35.656 busy:2702598688 (cyc) 00:05:35.656 total_run_count: 3893000 00:05:35.656 tsc_hz: 2700000000 (cyc) 00:05:35.656 ====================================== 00:05:35.656 poller_cost: 694 (cyc), 257 (nsec) 00:05:35.656 00:05:35.656 real 0m1.321s 00:05:35.656 user 0m1.231s 00:05:35.656 sys 0m0.084s 00:05:35.656 19:36:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.656 19:36:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.656 ************************************ 00:05:35.656 END TEST thread_poller_perf 00:05:35.656 ************************************ 00:05:35.656 19:36:17 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:35.656 00:05:35.656 real 0m2.928s 00:05:35.656 user 0m2.584s 00:05:35.656 sys 0m0.319s 00:05:35.656 19:36:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.656 19:36:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.656 ************************************ 00:05:35.656 END TEST thread 00:05:35.656 ************************************ 00:05:35.913 19:36:17 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:35.913 19:36:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.913 19:36:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.913 19:36:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.913 ************************************ 00:05:35.913 START TEST accel 00:05:35.913 ************************************ 00:05:35.913 19:36:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:35.913 * Looking for test storage... 00:05:35.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:35.913 19:36:17 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:35.913 19:36:17 -- accel/accel.sh@82 -- # get_expected_opcs 00:05:35.913 19:36:17 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.913 19:36:17 -- accel/accel.sh@62 -- # spdk_tgt_pid=1591009 00:05:35.913 19:36:17 -- accel/accel.sh@63 -- # waitforlisten 1591009 00:05:35.913 19:36:17 -- common/autotest_common.sh@817 -- # '[' -z 1591009 ']' 00:05:35.914 19:36:17 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:35.914 19:36:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.914 19:36:17 -- accel/accel.sh@61 -- # build_accel_config 00:05:35.914 19:36:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:35.914 19:36:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.914 19:36:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.914 19:36:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:35.914 19:36:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.914 19:36:17 -- common/autotest_common.sh@10 -- # set +x 00:05:35.914 19:36:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.914 19:36:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.914 19:36:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.914 19:36:17 -- accel/accel.sh@40 -- # local IFS=, 00:05:35.914 19:36:17 -- accel/accel.sh@41 -- # jq -r . 
00:05:35.914 [2024-04-24 19:36:17.385875] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:35.914 [2024-04-24 19:36:17.385985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591009 ] 00:05:35.914 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.172 [2024-04-24 19:36:17.444396] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.172 [2024-04-24 19:36:17.556055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.429 19:36:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:36.429 19:36:17 -- common/autotest_common.sh@850 -- # return 0 00:05:36.429 19:36:17 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:36.429 19:36:17 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:36.429 19:36:17 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:36.429 19:36:17 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:36.429 19:36:17 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:36.429 19:36:17 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:36.429 19:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:36.429 19:36:17 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:36.429 19:36:17 -- common/autotest_common.sh@10 -- # set +x 00:05:36.429 19:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:36.429 19:36:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.429 19:36:17 -- accel/accel.sh@72 -- # IFS== 00:05:36.429 19:36:17 -- accel/accel.sh@72 -- # read -r opc module 00:05:36.429 19:36:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.429 19:36:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.429 19:36:17 -- accel/accel.sh@72 -- # IFS== 00:05:36.429 19:36:17 -- accel/accel.sh@72 -- # read -r opc module 00:05:36.430 19:36:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.430 19:36:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # IFS== 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # read -r opc module 00:05:36.430 19:36:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.430 19:36:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # IFS== 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # read -r opc module 00:05:36.430 19:36:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.430 19:36:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # IFS== 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # read -r opc module 00:05:36.430 19:36:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.430 19:36:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # IFS== 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # read -r opc module 00:05:36.430 19:36:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.430 19:36:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # IFS== 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # read -r opc module 00:05:36.430 19:36:17 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:05:36.430 19:36:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # IFS== 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # read -r opc module 00:05:36.430 19:36:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.430 19:36:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # IFS== 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # read -r opc module 00:05:36.430 19:36:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.430 19:36:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # IFS== 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # read -r opc module 00:05:36.430 19:36:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.430 19:36:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # IFS== 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # read -r opc module 00:05:36.430 19:36:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.430 19:36:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # IFS== 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # read -r opc module 00:05:36.430 19:36:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.430 19:36:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # IFS== 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # read -r opc module 00:05:36.430 19:36:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.430 19:36:17 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # IFS== 00:05:36.430 19:36:17 -- accel/accel.sh@72 -- # read -r opc module 00:05:36.430 19:36:17 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.430 19:36:17 -- accel/accel.sh@75 -- # killprocess 1591009 00:05:36.430 19:36:17 -- common/autotest_common.sh@936 -- # '[' -z 1591009 ']' 00:05:36.430 19:36:17 -- common/autotest_common.sh@940 -- # kill -0 1591009 00:05:36.430 19:36:17 -- common/autotest_common.sh@941 -- # uname 00:05:36.430 19:36:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.430 19:36:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1591009 00:05:36.430 19:36:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:36.430 19:36:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:36.430 19:36:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1591009' 00:05:36.430 killing process with pid 1591009 00:05:36.430 19:36:17 -- common/autotest_common.sh@955 -- # kill 1591009 00:05:36.430 19:36:17 -- common/autotest_common.sh@960 -- # wait 1591009 00:05:36.994 19:36:18 -- accel/accel.sh@76 -- # trap - ERR 00:05:36.994 19:36:18 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:36.994 19:36:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:36.994 19:36:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.994 19:36:18 -- common/autotest_common.sh@10 -- # set +x 00:05:36.994 19:36:18 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:05:36.994 19:36:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:36.994 19:36:18 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:36.994 19:36:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.994 19:36:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.994 19:36:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.994 19:36:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.994 19:36:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.994 19:36:18 -- accel/accel.sh@40 -- # local IFS=, 00:05:36.994 19:36:18 -- accel/accel.sh@41 -- # jq -r . 00:05:36.994 19:36:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.994 19:36:18 -- common/autotest_common.sh@10 -- # set +x 00:05:36.994 19:36:18 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:36.994 19:36:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:36.994 19:36:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.994 19:36:18 -- common/autotest_common.sh@10 -- # set +x 00:05:37.252 ************************************ 00:05:37.252 START TEST accel_missing_filename 00:05:37.252 ************************************ 00:05:37.252 19:36:18 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:05:37.252 19:36:18 -- common/autotest_common.sh@638 -- # local es=0 00:05:37.253 19:36:18 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:37.253 19:36:18 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:37.253 19:36:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:37.253 19:36:18 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:37.253 19:36:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:37.253 19:36:18 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:37.253 19:36:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:37.253 19:36:18 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.253 19:36:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.253 19:36:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.253 19:36:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.253 19:36:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.253 19:36:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.253 19:36:18 -- accel/accel.sh@40 -- # local IFS=, 00:05:37.253 19:36:18 -- accel/accel.sh@41 -- # jq -r . 00:05:37.253 [2024-04-24 19:36:18.614329] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:37.253 [2024-04-24 19:36:18.614393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591189 ] 00:05:37.253 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.253 [2024-04-24 19:36:18.675381] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.511 [2024-04-24 19:36:18.793402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.511 [2024-04-24 19:36:18.852556] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:37.511 [2024-04-24 19:36:18.928430] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:37.769 A filename is required. 
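The "A filename is required." abort is exactly what this test wants: for compress workloads accel_perf reads its input from the file named by -l, so omitting the flag must fail before any work starts. A well-formed invocation, assuming the same build tree and the repo's test/accel/bib input that the next test uses, would be:

    build/examples/accel_perf -t 1 -w compress -l test/accel/bib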
00:05:37.769 19:36:19 -- common/autotest_common.sh@641 -- # es=234 00:05:37.769 19:36:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:37.769 19:36:19 -- common/autotest_common.sh@650 -- # es=106 00:05:37.769 19:36:19 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:37.769 19:36:19 -- common/autotest_common.sh@658 -- # es=1 00:05:37.769 19:36:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:37.769 00:05:37.769 real 0m0.458s 00:05:37.769 user 0m0.355s 00:05:37.769 sys 0m0.136s 00:05:37.769 19:36:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:37.769 19:36:19 -- common/autotest_common.sh@10 -- # set +x 00:05:37.769 ************************************ 00:05:37.769 END TEST accel_missing_filename 00:05:37.769 ************************************ 00:05:37.769 19:36:19 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:37.769 19:36:19 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:37.769 19:36:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.769 19:36:19 -- common/autotest_common.sh@10 -- # set +x 00:05:37.769 ************************************ 00:05:37.769 START TEST accel_compress_verify 00:05:37.769 ************************************ 00:05:37.769 19:36:19 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:37.769 19:36:19 -- common/autotest_common.sh@638 -- # local es=0 00:05:37.769 19:36:19 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:37.769 19:36:19 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:37.769 19:36:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:37.769 19:36:19 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:37.769 19:36:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:37.769 19:36:19 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:37.769 19:36:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:37.769 19:36:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.769 19:36:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.769 19:36:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.769 19:36:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.769 19:36:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.769 19:36:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.769 19:36:19 -- accel/accel.sh@40 -- # local IFS=, 00:05:37.769 19:36:19 -- accel/accel.sh@41 -- # jq -r . 00:05:37.769 [2024-04-24 19:36:19.187090] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:05:37.769 [2024-04-24 19:36:19.187142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591347 ] 00:05:37.769 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.769 [2024-04-24 19:36:19.248664] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.027 [2024-04-24 19:36:19.368975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.027 [2024-04-24 19:36:19.430691] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:38.027 [2024-04-24 19:36:19.519314] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:38.284 00:05:38.284 Compression does not support the verify option, aborting. 00:05:38.284 19:36:19 -- common/autotest_common.sh@641 -- # es=161 00:05:38.284 19:36:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:38.284 19:36:19 -- common/autotest_common.sh@650 -- # es=33 00:05:38.284 19:36:19 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:38.284 19:36:19 -- common/autotest_common.sh@658 -- # es=1 00:05:38.284 19:36:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:38.284 00:05:38.284 real 0m0.473s 00:05:38.284 user 0m0.356s 00:05:38.284 sys 0m0.147s 00:05:38.284 19:36:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.284 19:36:19 -- common/autotest_common.sh@10 -- # set +x 00:05:38.284 ************************************ 00:05:38.284 END TEST accel_compress_verify 00:05:38.284 ************************************ 00:05:38.284 19:36:19 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:38.284 19:36:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:38.284 19:36:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.284 19:36:19 -- common/autotest_common.sh@10 -- # set +x 00:05:38.284 ************************************ 00:05:38.284 START TEST accel_wrong_workload 00:05:38.284 ************************************ 00:05:38.284 19:36:19 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:05:38.284 19:36:19 -- common/autotest_common.sh@638 -- # local es=0 00:05:38.284 19:36:19 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:38.284 19:36:19 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:38.284 19:36:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:38.284 19:36:19 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:38.284 19:36:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:38.284 19:36:19 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:05:38.284 19:36:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:38.284 19:36:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.284 19:36:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.284 19:36:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.284 19:36:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.284 19:36:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.284 19:36:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.284 19:36:19 -- accel/accel.sh@40 -- # local IFS=, 00:05:38.284 19:36:19 -- accel/accel.sh@41 -- # jq -r . 
00:05:38.284 Unsupported workload type: foobar 00:05:38.284 [2024-04-24 19:36:19.779160] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:38.284 accel_perf options: 00:05:38.284 [-h help message] 00:05:38.284 [-q queue depth per core] 00:05:38.284 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:38.284 [-T number of threads per core 00:05:38.284 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:38.284 [-t time in seconds] 00:05:38.284 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:38.284 [ dif_verify, , dif_generate, dif_generate_copy 00:05:38.284 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:38.284 [-l for compress/decompress workloads, name of uncompressed input file 00:05:38.284 [-S for crc32c workload, use this seed value (default 0) 00:05:38.284 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:38.284 [-f for fill workload, use this BYTE value (default 255) 00:05:38.284 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:38.284 [-y verify result if this switch is on] 00:05:38.284 [-a tasks to allocate per core (default: same value as -q)] 00:05:38.284 Can be used to spread operations across a wider range of memory. 00:05:38.284 19:36:19 -- common/autotest_common.sh@641 -- # es=1 00:05:38.284 19:36:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:38.284 19:36:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:38.284 19:36:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:38.284 00:05:38.284 real 0m0.023s 00:05:38.284 user 0m0.010s 00:05:38.284 sys 0m0.013s 00:05:38.284 19:36:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.284 19:36:19 -- common/autotest_common.sh@10 -- # set +x 00:05:38.284 ************************************ 00:05:38.284 END TEST accel_wrong_workload 00:05:38.284 ************************************ 00:05:38.543 Error: writing output failed: Broken pipe 00:05:38.543 19:36:19 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:38.543 19:36:19 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:38.543 19:36:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.543 19:36:19 -- common/autotest_common.sh@10 -- # set +x 00:05:38.543 ************************************ 00:05:38.543 START TEST accel_negative_buffers 00:05:38.543 ************************************ 00:05:38.543 19:36:19 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:38.543 19:36:19 -- common/autotest_common.sh@638 -- # local es=0 00:05:38.543 19:36:19 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:38.543 19:36:19 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:38.543 19:36:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:38.543 19:36:19 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:38.543 19:36:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:38.543 19:36:19 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:05:38.543 19:36:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:05:38.543 19:36:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.543 19:36:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.543 19:36:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.543 19:36:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.543 19:36:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.543 19:36:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.543 19:36:19 -- accel/accel.sh@40 -- # local IFS=, 00:05:38.543 19:36:19 -- accel/accel.sh@41 -- # jq -r . 00:05:38.543 -x option must be non-negative. 00:05:38.543 [2024-04-24 19:36:19.914086] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:38.543 accel_perf options: 00:05:38.543 [-h help message] 00:05:38.543 [-q queue depth per core] 00:05:38.543 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:38.543 [-T number of threads per core 00:05:38.543 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:38.543 [-t time in seconds] 00:05:38.543 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:38.543 [ dif_verify, , dif_generate, dif_generate_copy 00:05:38.543 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:38.543 [-l for compress/decompress workloads, name of uncompressed input file 00:05:38.543 [-S for crc32c workload, use this seed value (default 0) 00:05:38.543 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:38.543 [-f for fill workload, use this BYTE value (default 255) 00:05:38.543 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:38.543 [-y verify result if this switch is on] 00:05:38.543 [-a tasks to allocate per core (default: same value as -q)] 00:05:38.543 Can be used to spread operations across a wider range of memory. 
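Both negative tests provoke this usage text on purpose: -w foobar names no known workload, and -x -1 violates the stated two-buffer minimum for xor. For contrast, invocations that the option listing above would accept look like this (binary path as used throughout this workspace; the traced runs additionally pass a JSON config via -c /dev/fd/62):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w xor -y -x 2      # xor with the minimum two source buffers
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y  # seeded crc32c, verifying the result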
00:05:38.543 19:36:19 -- common/autotest_common.sh@641 -- # es=1 00:05:38.543 19:36:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:38.543 19:36:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:38.543 19:36:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:38.543 00:05:38.543 real 0m0.023s 00:05:38.543 user 0m0.010s 00:05:38.543 sys 0m0.014s 00:05:38.543 19:36:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.543 19:36:19 -- common/autotest_common.sh@10 -- # set +x 00:05:38.543 ************************************ 00:05:38.543 END TEST accel_negative_buffers 00:05:38.543 ************************************ 00:05:38.543 Error: writing output failed: Broken pipe 00:05:38.543 19:36:19 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:38.543 19:36:19 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:38.543 19:36:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.543 19:36:19 -- common/autotest_common.sh@10 -- # set +x 00:05:38.543 ************************************ 00:05:38.543 START TEST accel_crc32c 00:05:38.543 ************************************ 00:05:38.543 19:36:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:38.543 19:36:20 -- accel/accel.sh@16 -- # local accel_opc 00:05:38.543 19:36:20 -- accel/accel.sh@17 -- # local accel_module 00:05:38.543 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.543 19:36:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:38.543 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.543 19:36:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:38.543 19:36:20 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.543 19:36:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.543 19:36:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.543 19:36:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.543 19:36:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.543 19:36:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.543 19:36:20 -- accel/accel.sh@40 -- # local IFS=, 00:05:38.543 19:36:20 -- accel/accel.sh@41 -- # jq -r . 00:05:38.801 [2024-04-24 19:36:20.060609] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
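The START TEST/END TEST banners and the real/user/sys triplets bracketing every case come from run_test in autotest_common.sh, which wraps each test function in a banner and the time builtin. A plausible reduction (hypothetical; the real wrapper also toggles xtrace, which is why xtrace_disable and set +x appear around each banner):

    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }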
00:05:38.801 [2024-04-24 19:36:20.060707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591555 ] 00:05:38.801 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.801 [2024-04-24 19:36:20.122969] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.801 [2024-04-24 19:36:20.243817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val= 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val= 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val=0x1 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val= 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val= 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val=crc32c 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val=32 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val= 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val=software 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@22 -- # accel_module=software 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val=32 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val=32 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- 
accel/accel.sh@20 -- # val=1 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val=Yes 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val= 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:38.801 19:36:20 -- accel/accel.sh@20 -- # val= 00:05:38.801 19:36:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # IFS=: 00:05:38.801 19:36:20 -- accel/accel.sh@19 -- # read -r var val 00:05:40.173 19:36:21 -- accel/accel.sh@20 -- # val= 00:05:40.173 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.173 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.173 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.173 19:36:21 -- accel/accel.sh@20 -- # val= 00:05:40.173 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.173 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.173 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.173 19:36:21 -- accel/accel.sh@20 -- # val= 00:05:40.173 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.173 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.173 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.173 19:36:21 -- accel/accel.sh@20 -- # val= 00:05:40.173 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.173 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.173 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.173 19:36:21 -- accel/accel.sh@20 -- # val= 00:05:40.173 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.173 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.173 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.173 19:36:21 -- accel/accel.sh@20 -- # val= 00:05:40.173 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.173 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.173 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.173 19:36:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.173 19:36:21 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:40.173 19:36:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.173 00:05:40.173 real 0m1.482s 00:05:40.173 user 0m1.337s 00:05:40.173 sys 0m0.148s 00:05:40.173 19:36:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.173 19:36:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.173 ************************************ 00:05:40.173 END TEST accel_crc32c 00:05:40.173 ************************************ 00:05:40.173 19:36:21 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:40.173 19:36:21 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:40.173 19:36:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.173 19:36:21 -- common/autotest_common.sh@10 -- # set +x 00:05:40.173 ************************************ 00:05:40.173 START TEST 
accel_crc32c_C2 00:05:40.173 ************************************ 00:05:40.173 19:36:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:40.173 19:36:21 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.173 19:36:21 -- accel/accel.sh@17 -- # local accel_module 00:05:40.173 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.173 19:36:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:40.173 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.173 19:36:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:40.173 19:36:21 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.173 19:36:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.173 19:36:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.173 19:36:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.173 19:36:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.173 19:36:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.173 19:36:21 -- accel/accel.sh@40 -- # local IFS=, 00:05:40.173 19:36:21 -- accel/accel.sh@41 -- # jq -r . 00:05:40.173 [2024-04-24 19:36:21.663216] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:40.173 [2024-04-24 19:36:21.663282] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591717 ] 00:05:40.432 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.432 [2024-04-24 19:36:21.727274] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.432 [2024-04-24 19:36:21.847187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val= 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val= 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val=0x1 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val= 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val= 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val=crc32c 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val=0 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val= 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val=software 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@22 -- # accel_module=software 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val=32 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val=32 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val=1 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val=Yes 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val= 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.432 19:36:21 -- accel/accel.sh@20 -- # val= 00:05:40.432 19:36:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # IFS=: 00:05:40.432 19:36:21 -- accel/accel.sh@19 -- # read -r var val 00:05:41.806 19:36:23 -- accel/accel.sh@20 -- # val= 00:05:41.806 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.806 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:41.806 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:41.806 19:36:23 -- accel/accel.sh@20 -- # val= 00:05:41.806 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.806 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:41.806 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:41.806 19:36:23 -- accel/accel.sh@20 -- # val= 00:05:41.806 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.806 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:41.806 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:41.806 19:36:23 -- accel/accel.sh@20 -- # val= 00:05:41.806 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.806 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:41.806 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:41.806 19:36:23 -- accel/accel.sh@20 -- # val= 00:05:41.806 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.806 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:41.806 19:36:23 -- 
accel/accel.sh@19 -- # read -r var val 00:05:41.806 19:36:23 -- accel/accel.sh@20 -- # val= 00:05:41.806 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.806 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:41.806 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:41.806 19:36:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.806 19:36:23 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:41.806 19:36:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.806 00:05:41.806 real 0m1.475s 00:05:41.806 user 0m1.330s 00:05:41.806 sys 0m0.147s 00:05:41.806 19:36:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:41.806 19:36:23 -- common/autotest_common.sh@10 -- # set +x 00:05:41.806 ************************************ 00:05:41.806 END TEST accel_crc32c_C2 00:05:41.806 ************************************ 00:05:41.806 19:36:23 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:41.806 19:36:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:41.806 19:36:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.806 19:36:23 -- common/autotest_common.sh@10 -- # set +x 00:05:41.806 ************************************ 00:05:41.806 START TEST accel_copy 00:05:41.806 ************************************ 00:05:41.806 19:36:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:05:41.806 19:36:23 -- accel/accel.sh@16 -- # local accel_opc 00:05:41.806 19:36:23 -- accel/accel.sh@17 -- # local accel_module 00:05:41.806 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:41.806 19:36:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:41.806 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:41.806 19:36:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:41.806 19:36:23 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.806 19:36:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.806 19:36:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.806 19:36:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.806 19:36:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.806 19:36:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.806 19:36:23 -- accel/accel.sh@40 -- # local IFS=, 00:05:41.806 19:36:23 -- accel/accel.sh@41 -- # jq -r . 00:05:41.806 [2024-04-24 19:36:23.258238] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
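The long runs of val= lines in each passing test appear to be accel.sh walking accel_perf's configuration report: each line is split on IFS=: into a label and a value (val=crc32c, val=software, val='1 seconds' above), the opcode and module are remembered, and the test asserts both at the end ([[ -n software ]], [[ -n crc32c ]]). A sketch of that pattern — the report labels matched here are assumptions, not taken from accel.sh:

    perf_output=$(./build/examples/accel_perf -t 1 -w crc32c -S 32 -y)
    accel_opc='' accel_module=''
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=${val//[[:space:]]/} ;;   # assumed label
            *module*) accel_module=${val//[[:space:]]/} ;; # assumed label
        esac
    done <<< "$perf_output"
    [[ -n $accel_module && -n $accel_opc ]]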
00:05:41.806 [2024-04-24 19:36:23.258290] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591887 ] 00:05:41.806 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.806 [2024-04-24 19:36:23.318683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.063 [2024-04-24 19:36:23.438029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.063 19:36:23 -- accel/accel.sh@20 -- # val= 00:05:42.063 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.063 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.063 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.063 19:36:23 -- accel/accel.sh@20 -- # val= 00:05:42.063 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.063 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.063 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.063 19:36:23 -- accel/accel.sh@20 -- # val=0x1 00:05:42.063 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.063 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.063 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.063 19:36:23 -- accel/accel.sh@20 -- # val= 00:05:42.063 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.063 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.064 19:36:23 -- accel/accel.sh@20 -- # val= 00:05:42.064 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.064 19:36:23 -- accel/accel.sh@20 -- # val=copy 00:05:42.064 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.064 19:36:23 -- accel/accel.sh@23 -- # accel_opc=copy 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.064 19:36:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.064 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.064 19:36:23 -- accel/accel.sh@20 -- # val= 00:05:42.064 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.064 19:36:23 -- accel/accel.sh@20 -- # val=software 00:05:42.064 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.064 19:36:23 -- accel/accel.sh@22 -- # accel_module=software 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.064 19:36:23 -- accel/accel.sh@20 -- # val=32 00:05:42.064 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.064 19:36:23 -- accel/accel.sh@20 -- # val=32 00:05:42.064 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.064 19:36:23 -- accel/accel.sh@20 -- # val=1 00:05:42.064 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.064 19:36:23 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:42.064 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.064 19:36:23 -- accel/accel.sh@20 -- # val=Yes 00:05:42.064 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.064 19:36:23 -- accel/accel.sh@20 -- # val= 00:05:42.064 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.064 19:36:23 -- accel/accel.sh@20 -- # val= 00:05:42.064 19:36:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.064 19:36:23 -- accel/accel.sh@19 -- # read -r var val 00:05:43.490 19:36:24 -- accel/accel.sh@20 -- # val= 00:05:43.490 19:36:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.490 19:36:24 -- accel/accel.sh@19 -- # IFS=: 00:05:43.490 19:36:24 -- accel/accel.sh@19 -- # read -r var val 00:05:43.490 19:36:24 -- accel/accel.sh@20 -- # val= 00:05:43.490 19:36:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.490 19:36:24 -- accel/accel.sh@19 -- # IFS=: 00:05:43.490 19:36:24 -- accel/accel.sh@19 -- # read -r var val 00:05:43.490 19:36:24 -- accel/accel.sh@20 -- # val= 00:05:43.490 19:36:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.490 19:36:24 -- accel/accel.sh@19 -- # IFS=: 00:05:43.490 19:36:24 -- accel/accel.sh@19 -- # read -r var val 00:05:43.490 19:36:24 -- accel/accel.sh@20 -- # val= 00:05:43.490 19:36:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.490 19:36:24 -- accel/accel.sh@19 -- # IFS=: 00:05:43.490 19:36:24 -- accel/accel.sh@19 -- # read -r var val 00:05:43.490 19:36:24 -- accel/accel.sh@20 -- # val= 00:05:43.490 19:36:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.490 19:36:24 -- accel/accel.sh@19 -- # IFS=: 00:05:43.490 19:36:24 -- accel/accel.sh@19 -- # read -r var val 00:05:43.490 19:36:24 -- accel/accel.sh@20 -- # val= 00:05:43.490 19:36:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.490 19:36:24 -- accel/accel.sh@19 -- # IFS=: 00:05:43.490 19:36:24 -- accel/accel.sh@19 -- # read -r var val 00:05:43.490 19:36:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.490 19:36:24 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:43.490 19:36:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.490 00:05:43.490 real 0m1.476s 00:05:43.490 user 0m1.330s 00:05:43.490 sys 0m0.147s 00:05:43.490 19:36:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:43.490 19:36:24 -- common/autotest_common.sh@10 -- # set +x 00:05:43.490 ************************************ 00:05:43.490 END TEST accel_copy 00:05:43.490 ************************************ 00:05:43.490 19:36:24 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:43.490 19:36:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:43.490 19:36:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.490 19:36:24 -- common/autotest_common.sh@10 -- # set +x 00:05:43.490 ************************************ 00:05:43.490 START TEST accel_fill 00:05:43.490 ************************************ 00:05:43.490 19:36:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:43.490 19:36:24 -- accel/accel.sh@16 -- # local accel_opc 
00:05:43.490 19:36:24 -- accel/accel.sh@17 -- # local accel_module 00:05:43.490 19:36:24 -- accel/accel.sh@19 -- # IFS=: 00:05:43.490 19:36:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:43.490 19:36:24 -- accel/accel.sh@19 -- # read -r var val 00:05:43.490 19:36:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:43.490 19:36:24 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.490 19:36:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.490 19:36:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.490 19:36:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.490 19:36:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.490 19:36:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.490 19:36:24 -- accel/accel.sh@40 -- # local IFS=, 00:05:43.490 19:36:24 -- accel/accel.sh@41 -- # jq -r . 00:05:43.490 [2024-04-24 19:36:24.849950] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:43.490 [2024-04-24 19:36:24.850017] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592164 ] 00:05:43.490 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.490 [2024-04-24 19:36:24.910589] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.749 [2024-04-24 19:36:25.030465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.749 19:36:25 -- accel/accel.sh@20 -- # val= 00:05:43.749 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 19:36:25 -- accel/accel.sh@20 -- # val= 00:05:43.749 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 19:36:25 -- accel/accel.sh@20 -- # val=0x1 00:05:43.749 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 19:36:25 -- accel/accel.sh@20 -- # val= 00:05:43.749 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 19:36:25 -- accel/accel.sh@20 -- # val= 00:05:43.749 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 19:36:25 -- accel/accel.sh@20 -- # val=fill 00:05:43.749 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 19:36:25 -- accel/accel.sh@23 -- # accel_opc=fill 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 19:36:25 -- accel/accel.sh@20 -- # val=0x80 00:05:43.749 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 19:36:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.749 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 19:36:25 -- accel/accel.sh@19 
-- # read -r var val 00:05:43.749 19:36:25 -- accel/accel.sh@20 -- # val= 00:05:43.749 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 19:36:25 -- accel/accel.sh@20 -- # val=software 00:05:43.749 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 19:36:25 -- accel/accel.sh@22 -- # accel_module=software 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 19:36:25 -- accel/accel.sh@20 -- # val=64 00:05:43.749 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 19:36:25 -- accel/accel.sh@20 -- # val=64 00:05:43.750 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.750 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.750 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.750 19:36:25 -- accel/accel.sh@20 -- # val=1 00:05:43.750 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.750 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.750 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.750 19:36:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.750 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.750 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.750 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.750 19:36:25 -- accel/accel.sh@20 -- # val=Yes 00:05:43.750 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.750 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.750 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.750 19:36:25 -- accel/accel.sh@20 -- # val= 00:05:43.750 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.750 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.750 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:43.750 19:36:25 -- accel/accel.sh@20 -- # val= 00:05:43.750 19:36:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.750 19:36:25 -- accel/accel.sh@19 -- # IFS=: 00:05:43.750 19:36:25 -- accel/accel.sh@19 -- # read -r var val 00:05:45.133 19:36:26 -- accel/accel.sh@20 -- # val= 00:05:45.133 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.133 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.133 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.133 19:36:26 -- accel/accel.sh@20 -- # val= 00:05:45.133 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.133 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.133 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.133 19:36:26 -- accel/accel.sh@20 -- # val= 00:05:45.133 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.133 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.133 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.133 19:36:26 -- accel/accel.sh@20 -- # val= 00:05:45.133 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.133 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.133 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.133 19:36:26 -- accel/accel.sh@20 -- # val= 00:05:45.133 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.133 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.133 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.133 19:36:26 -- accel/accel.sh@20 -- # val= 00:05:45.133 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.133 19:36:26 -- accel/accel.sh@19 
-- # IFS=: 00:05:45.133 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.133 19:36:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.133 19:36:26 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:45.133 19:36:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.133 00:05:45.133 real 0m1.478s 00:05:45.133 user 0m1.327s 00:05:45.133 sys 0m0.153s 00:05:45.133 19:36:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.133 19:36:26 -- common/autotest_common.sh@10 -- # set +x 00:05:45.133 ************************************ 00:05:45.133 END TEST accel_fill 00:05:45.133 ************************************ 00:05:45.133 19:36:26 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:45.133 19:36:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:45.133 19:36:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.133 19:36:26 -- common/autotest_common.sh@10 -- # set +x 00:05:45.133 ************************************ 00:05:45.133 START TEST accel_copy_crc32c 00:05:45.133 ************************************ 00:05:45.133 19:36:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:05:45.133 19:36:26 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.133 19:36:26 -- accel/accel.sh@17 -- # local accel_module 00:05:45.133 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.133 19:36:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:45.133 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.133 19:36:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:45.133 19:36:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.133 19:36:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.133 19:36:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.133 19:36:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.133 19:36:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.133 19:36:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.133 19:36:26 -- accel/accel.sh@40 -- # local IFS=, 00:05:45.133 19:36:26 -- accel/accel.sh@41 -- # jq -r . 00:05:45.133 [2024-04-24 19:36:26.451063] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
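accel_fill above is the first case here to override the perf defaults: -f 128 surfaces in the trace as val=0x80 (the fill byte), and -q 64/-a 64 set the per-core queue depth and task allocation described in the usage listing. The equivalent standalone run:

    # -f fill byte (128 == 0x80), -q queue depth, -a tasks per core, -y verify result
    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y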
00:05:45.133 [2024-04-24 19:36:26.451129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592329 ] 00:05:45.133 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.133 [2024-04-24 19:36:26.514343] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.133 [2024-04-24 19:36:26.636224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val= 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val= 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val=0x1 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val= 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val= 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val=0 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val= 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val=software 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@22 -- # accel_module=software 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val=32 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 
00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val=32 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val=1 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val=Yes 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val= 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.391 19:36:26 -- accel/accel.sh@20 -- # val= 00:05:45.391 19:36:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.391 19:36:26 -- accel/accel.sh@19 -- # read -r var val 00:05:46.763 19:36:27 -- accel/accel.sh@20 -- # val= 00:05:46.763 19:36:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.763 19:36:27 -- accel/accel.sh@19 -- # IFS=: 00:05:46.763 19:36:27 -- accel/accel.sh@19 -- # read -r var val 00:05:46.763 19:36:27 -- accel/accel.sh@20 -- # val= 00:05:46.763 19:36:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.763 19:36:27 -- accel/accel.sh@19 -- # IFS=: 00:05:46.763 19:36:27 -- accel/accel.sh@19 -- # read -r var val 00:05:46.763 19:36:27 -- accel/accel.sh@20 -- # val= 00:05:46.763 19:36:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.763 19:36:27 -- accel/accel.sh@19 -- # IFS=: 00:05:46.763 19:36:27 -- accel/accel.sh@19 -- # read -r var val 00:05:46.763 19:36:27 -- accel/accel.sh@20 -- # val= 00:05:46.763 19:36:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.763 19:36:27 -- accel/accel.sh@19 -- # IFS=: 00:05:46.763 19:36:27 -- accel/accel.sh@19 -- # read -r var val 00:05:46.763 19:36:27 -- accel/accel.sh@20 -- # val= 00:05:46.764 19:36:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.764 19:36:27 -- accel/accel.sh@19 -- # IFS=: 00:05:46.764 19:36:27 -- accel/accel.sh@19 -- # read -r var val 00:05:46.764 19:36:27 -- accel/accel.sh@20 -- # val= 00:05:46.764 19:36:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.764 19:36:27 -- accel/accel.sh@19 -- # IFS=: 00:05:46.764 19:36:27 -- accel/accel.sh@19 -- # read -r var val 00:05:46.764 19:36:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.764 19:36:27 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:46.764 19:36:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.764 00:05:46.764 real 0m1.471s 00:05:46.764 user 0m1.327s 00:05:46.764 sys 0m0.145s 00:05:46.764 19:36:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.764 19:36:27 -- common/autotest_common.sh@10 -- # set +x 00:05:46.764 ************************************ 00:05:46.764 END TEST accel_copy_crc32c 00:05:46.764 ************************************ 00:05:46.764 19:36:27 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:46.764 
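This run repeats copy_crc32c with -C 2, the io vector size option from the usage listing; in the trace below the buffers consequently come through as '4096 bytes' and '8192 bytes', i.e. two 4 KiB segments per operation (a reading inferred from the usage text, not confirmed by the log). The bare invocation:

    # -C 2: split each operation across two iovec segments (inferred)
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2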
19:36:27 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:46.764 19:36:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.764 19:36:27 -- common/autotest_common.sh@10 -- # set +x 00:05:46.764 ************************************ 00:05:46.764 START TEST accel_copy_crc32c_C2 00:05:46.764 ************************************ 00:05:46.764 19:36:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:46.764 19:36:28 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.764 19:36:28 -- accel/accel.sh@17 -- # local accel_module 00:05:46.764 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:46.764 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:46.764 19:36:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:46.764 19:36:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:46.764 19:36:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.764 19:36:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.764 19:36:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.764 19:36:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.764 19:36:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.764 19:36:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.764 19:36:28 -- accel/accel.sh@40 -- # local IFS=, 00:05:46.764 19:36:28 -- accel/accel.sh@41 -- # jq -r . 00:05:46.764 [2024-04-24 19:36:28.041648] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:46.764 [2024-04-24 19:36:28.041726] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592610 ] 00:05:46.764 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.764 [2024-04-24 19:36:28.105266] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.764 [2024-04-24 19:36:28.221800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.021 19:36:28 -- accel/accel.sh@20 -- # val= 00:05:47.021 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.021 19:36:28 -- accel/accel.sh@20 -- # val= 00:05:47.021 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.021 19:36:28 -- accel/accel.sh@20 -- # val=0x1 00:05:47.021 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.021 19:36:28 -- accel/accel.sh@20 -- # val= 00:05:47.021 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.021 19:36:28 -- accel/accel.sh@20 -- # val= 00:05:47.021 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.021 19:36:28 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:47.021 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.021 19:36:28 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.021 
19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.021 19:36:28 -- accel/accel.sh@20 -- # val=0 00:05:47.021 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.021 19:36:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.021 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.021 19:36:28 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:47.021 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.021 19:36:28 -- accel/accel.sh@20 -- # val= 00:05:47.021 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.021 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.021 19:36:28 -- accel/accel.sh@20 -- # val=software 00:05:47.022 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 19:36:28 -- accel/accel.sh@22 -- # accel_module=software 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 19:36:28 -- accel/accel.sh@20 -- # val=32 00:05:47.022 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 19:36:28 -- accel/accel.sh@20 -- # val=32 00:05:47.022 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 19:36:28 -- accel/accel.sh@20 -- # val=1 00:05:47.022 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 19:36:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.022 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 19:36:28 -- accel/accel.sh@20 -- # val=Yes 00:05:47.022 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 19:36:28 -- accel/accel.sh@20 -- # val= 00:05:47.022 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.022 19:36:28 -- accel/accel.sh@20 -- # val= 00:05:47.022 19:36:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.022 19:36:28 -- accel/accel.sh@19 -- # read -r var val 00:05:48.394 19:36:29 -- accel/accel.sh@20 -- # val= 00:05:48.394 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.394 19:36:29 -- accel/accel.sh@20 -- # val= 00:05:48.394 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.394 19:36:29 -- accel/accel.sh@20 -- # val= 00:05:48.394 19:36:29 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.394 19:36:29 -- accel/accel.sh@20 -- # val= 00:05:48.394 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.394 19:36:29 -- accel/accel.sh@20 -- # val= 00:05:48.394 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.394 19:36:29 -- accel/accel.sh@20 -- # val= 00:05:48.394 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.394 19:36:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.394 19:36:29 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:48.394 19:36:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.394 00:05:48.394 real 0m1.474s 00:05:48.394 user 0m1.326s 00:05:48.394 sys 0m0.149s 00:05:48.394 19:36:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:48.394 19:36:29 -- common/autotest_common.sh@10 -- # set +x 00:05:48.394 ************************************ 00:05:48.394 END TEST accel_copy_crc32c_C2 00:05:48.394 ************************************ 00:05:48.394 19:36:29 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:48.394 19:36:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:48.394 19:36:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.394 19:36:29 -- common/autotest_common.sh@10 -- # set +x 00:05:48.394 ************************************ 00:05:48.394 START TEST accel_dualcast 00:05:48.394 ************************************ 00:05:48.394 19:36:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:05:48.394 19:36:29 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.394 19:36:29 -- accel/accel.sh@17 -- # local accel_module 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.394 19:36:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:48.394 19:36:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:48.394 19:36:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.394 19:36:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.394 19:36:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.394 19:36:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.394 19:36:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.394 19:36:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.394 19:36:29 -- accel/accel.sh@40 -- # local IFS=, 00:05:48.394 19:36:29 -- accel/accel.sh@41 -- # jq -r . 00:05:48.394 [2024-04-24 19:36:29.634677] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:05:48.394 [2024-04-24 19:36:29.634741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592780 ] 00:05:48.394 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.394 [2024-04-24 19:36:29.696653] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.394 [2024-04-24 19:36:29.815100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.394 19:36:29 -- accel/accel.sh@20 -- # val= 00:05:48.394 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.394 19:36:29 -- accel/accel.sh@20 -- # val= 00:05:48.394 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.394 19:36:29 -- accel/accel.sh@20 -- # val=0x1 00:05:48.394 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.394 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.395 19:36:29 -- accel/accel.sh@20 -- # val= 00:05:48.395 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.395 19:36:29 -- accel/accel.sh@20 -- # val= 00:05:48.395 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.395 19:36:29 -- accel/accel.sh@20 -- # val=dualcast 00:05:48.395 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.395 19:36:29 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.395 19:36:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.395 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.395 19:36:29 -- accel/accel.sh@20 -- # val= 00:05:48.395 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.395 19:36:29 -- accel/accel.sh@20 -- # val=software 00:05:48.395 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.395 19:36:29 -- accel/accel.sh@22 -- # accel_module=software 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.395 19:36:29 -- accel/accel.sh@20 -- # val=32 00:05:48.395 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.395 19:36:29 -- accel/accel.sh@20 -- # val=32 00:05:48.395 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.395 19:36:29 -- accel/accel.sh@20 -- # val=1 00:05:48.395 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.395 19:36:29 
-- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.395 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.395 19:36:29 -- accel/accel.sh@20 -- # val=Yes 00:05:48.395 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.395 19:36:29 -- accel/accel.sh@20 -- # val= 00:05:48.395 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:48.395 19:36:29 -- accel/accel.sh@20 -- # val= 00:05:48.395 19:36:29 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # IFS=: 00:05:48.395 19:36:29 -- accel/accel.sh@19 -- # read -r var val 00:05:49.767 19:36:31 -- accel/accel.sh@20 -- # val= 00:05:49.767 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.767 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:49.767 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:49.767 19:36:31 -- accel/accel.sh@20 -- # val= 00:05:49.767 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.767 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:49.767 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:49.767 19:36:31 -- accel/accel.sh@20 -- # val= 00:05:49.767 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.767 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:49.767 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:49.767 19:36:31 -- accel/accel.sh@20 -- # val= 00:05:49.767 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.767 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:49.767 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:49.767 19:36:31 -- accel/accel.sh@20 -- # val= 00:05:49.767 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.767 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:49.767 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:49.767 19:36:31 -- accel/accel.sh@20 -- # val= 00:05:49.767 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.767 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:49.767 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:49.767 19:36:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.767 19:36:31 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:49.767 19:36:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.767 00:05:49.767 real 0m1.481s 00:05:49.767 user 0m1.331s 00:05:49.767 sys 0m0.150s 00:05:49.767 19:36:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.767 19:36:31 -- common/autotest_common.sh@10 -- # set +x 00:05:49.767 ************************************ 00:05:49.767 END TEST accel_dualcast 00:05:49.767 ************************************ 00:05:49.767 19:36:31 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:49.767 19:36:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:49.767 19:36:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.767 19:36:31 -- common/autotest_common.sh@10 -- # set +x 00:05:49.767 ************************************ 00:05:49.767 START TEST accel_compare 00:05:49.767 ************************************ 00:05:49.767 19:36:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:05:49.767 19:36:31 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.767 19:36:31 
-- accel/accel.sh@17 -- # local accel_module 00:05:49.767 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:49.767 19:36:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:49.767 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:49.767 19:36:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:49.767 19:36:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.767 19:36:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.767 19:36:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.767 19:36:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.767 19:36:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.767 19:36:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.767 19:36:31 -- accel/accel.sh@40 -- # local IFS=, 00:05:49.767 19:36:31 -- accel/accel.sh@41 -- # jq -r . 00:05:49.767 [2024-04-24 19:36:31.234429] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:49.767 [2024-04-24 19:36:31.234492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592992 ] 00:05:49.767 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.026 [2024-04-24 19:36:31.297612] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.026 [2024-04-24 19:36:31.415840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val= 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val= 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val=0x1 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val= 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val= 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val=compare 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@23 -- # accel_opc=compare 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val= 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- 
accel/accel.sh@20 -- # val=software 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@22 -- # accel_module=software 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val=32 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val=32 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val=1 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val=Yes 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val= 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.026 19:36:31 -- accel/accel.sh@20 -- # val= 00:05:50.026 19:36:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.026 19:36:31 -- accel/accel.sh@19 -- # read -r var val 00:05:51.399 19:36:32 -- accel/accel.sh@20 -- # val= 00:05:51.399 19:36:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.399 19:36:32 -- accel/accel.sh@19 -- # IFS=: 00:05:51.399 19:36:32 -- accel/accel.sh@19 -- # read -r var val 00:05:51.399 19:36:32 -- accel/accel.sh@20 -- # val= 00:05:51.399 19:36:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.399 19:36:32 -- accel/accel.sh@19 -- # IFS=: 00:05:51.399 19:36:32 -- accel/accel.sh@19 -- # read -r var val 00:05:51.399 19:36:32 -- accel/accel.sh@20 -- # val= 00:05:51.399 19:36:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.399 19:36:32 -- accel/accel.sh@19 -- # IFS=: 00:05:51.399 19:36:32 -- accel/accel.sh@19 -- # read -r var val 00:05:51.399 19:36:32 -- accel/accel.sh@20 -- # val= 00:05:51.399 19:36:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.399 19:36:32 -- accel/accel.sh@19 -- # IFS=: 00:05:51.399 19:36:32 -- accel/accel.sh@19 -- # read -r var val 00:05:51.399 19:36:32 -- accel/accel.sh@20 -- # val= 00:05:51.399 19:36:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.399 19:36:32 -- accel/accel.sh@19 -- # IFS=: 00:05:51.399 19:36:32 -- accel/accel.sh@19 -- # read -r var val 00:05:51.399 19:36:32 -- accel/accel.sh@20 -- # val= 00:05:51.399 19:36:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.399 19:36:32 -- accel/accel.sh@19 -- # IFS=: 00:05:51.399 19:36:32 -- accel/accel.sh@19 -- # read -r var val 00:05:51.399 19:36:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.399 19:36:32 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:51.399 19:36:32 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:05:51.399 00:05:51.399 real 0m1.468s 00:05:51.399 user 0m1.323s 00:05:51.399 sys 0m0.146s 00:05:51.399 19:36:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.399 19:36:32 -- common/autotest_common.sh@10 -- # set +x 00:05:51.399 ************************************ 00:05:51.399 END TEST accel_compare 00:05:51.399 ************************************ 00:05:51.399 19:36:32 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:51.399 19:36:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:51.399 19:36:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.399 19:36:32 -- common/autotest_common.sh@10 -- # set +x 00:05:51.399 ************************************ 00:05:51.399 START TEST accel_xor 00:05:51.399 ************************************ 00:05:51.399 19:36:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:05:51.399 19:36:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:51.399 19:36:32 -- accel/accel.sh@17 -- # local accel_module 00:05:51.399 19:36:32 -- accel/accel.sh@19 -- # IFS=: 00:05:51.399 19:36:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:51.399 19:36:32 -- accel/accel.sh@19 -- # read -r var val 00:05:51.399 19:36:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:51.399 19:36:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.399 19:36:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.399 19:36:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.399 19:36:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.399 19:36:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.399 19:36:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.399 19:36:32 -- accel/accel.sh@40 -- # local IFS=, 00:05:51.399 19:36:32 -- accel/accel.sh@41 -- # jq -r . 00:05:51.399 [2024-04-24 19:36:32.827093] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:05:51.399 [2024-04-24 19:36:32.827162] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593227 ] 00:05:51.399 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.399 [2024-04-24 19:36:32.889025] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.659 [2024-04-24 19:36:33.011894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val= 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val= 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val=0x1 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val= 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val= 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val=xor 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val=2 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val= 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val=software 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@22 -- # accel_module=software 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val=32 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val=32 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- 
accel/accel.sh@20 -- # val=1 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val=Yes 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val= 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:51.659 19:36:33 -- accel/accel.sh@20 -- # val= 00:05:51.659 19:36:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # IFS=: 00:05:51.659 19:36:33 -- accel/accel.sh@19 -- # read -r var val 00:05:53.048 19:36:34 -- accel/accel.sh@20 -- # val= 00:05:53.048 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.048 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.048 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.048 19:36:34 -- accel/accel.sh@20 -- # val= 00:05:53.048 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.048 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.048 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.048 19:36:34 -- accel/accel.sh@20 -- # val= 00:05:53.048 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.048 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.048 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.048 19:36:34 -- accel/accel.sh@20 -- # val= 00:05:53.048 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.048 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.048 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.048 19:36:34 -- accel/accel.sh@20 -- # val= 00:05:53.048 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.048 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.048 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.048 19:36:34 -- accel/accel.sh@20 -- # val= 00:05:53.048 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.048 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.048 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.048 19:36:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.048 19:36:34 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:53.048 19:36:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.048 00:05:53.048 real 0m1.489s 00:05:53.048 user 0m1.351s 00:05:53.048 sys 0m0.139s 00:05:53.048 19:36:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.048 19:36:34 -- common/autotest_common.sh@10 -- # set +x 00:05:53.048 ************************************ 00:05:53.048 END TEST accel_xor 00:05:53.048 ************************************ 00:05:53.048 19:36:34 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:53.048 19:36:34 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:53.048 19:36:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.048 19:36:34 -- common/autotest_common.sh@10 -- # set +x 00:05:53.048 ************************************ 00:05:53.048 START TEST accel_xor 
00:05:53.048 ************************************ 00:05:53.048 19:36:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:05:53.048 19:36:34 -- accel/accel.sh@16 -- # local accel_opc 00:05:53.048 19:36:34 -- accel/accel.sh@17 -- # local accel_module 00:05:53.048 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.048 19:36:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:53.048 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.048 19:36:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:53.048 19:36:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.048 19:36:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.048 19:36:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.048 19:36:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.048 19:36:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.048 19:36:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.048 19:36:34 -- accel/accel.sh@40 -- # local IFS=, 00:05:53.048 19:36:34 -- accel/accel.sh@41 -- # jq -r . 00:05:53.048 [2024-04-24 19:36:34.440055] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:53.048 [2024-04-24 19:36:34.440124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593390 ] 00:05:53.048 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.048 [2024-04-24 19:36:34.505962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.306 [2024-04-24 19:36:34.631285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.306 19:36:34 -- accel/accel.sh@20 -- # val= 00:05:53.306 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.306 19:36:34 -- accel/accel.sh@20 -- # val= 00:05:53.306 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.306 19:36:34 -- accel/accel.sh@20 -- # val=0x1 00:05:53.306 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.306 19:36:34 -- accel/accel.sh@20 -- # val= 00:05:53.306 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.306 19:36:34 -- accel/accel.sh@20 -- # val= 00:05:53.306 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.306 19:36:34 -- accel/accel.sh@20 -- # val=xor 00:05:53.306 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.306 19:36:34 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.306 19:36:34 -- accel/accel.sh@20 -- # val=3 00:05:53.306 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.306 19:36:34 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:05:53.306 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.306 19:36:34 -- accel/accel.sh@20 -- # val= 00:05:53.306 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.306 19:36:34 -- accel/accel.sh@20 -- # val=software 00:05:53.306 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.306 19:36:34 -- accel/accel.sh@22 -- # accel_module=software 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.306 19:36:34 -- accel/accel.sh@20 -- # val=32 00:05:53.306 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.306 19:36:34 -- accel/accel.sh@20 -- # val=32 00:05:53.306 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.306 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.306 19:36:34 -- accel/accel.sh@20 -- # val=1 00:05:53.307 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.307 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.307 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.307 19:36:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.307 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.307 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.307 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.307 19:36:34 -- accel/accel.sh@20 -- # val=Yes 00:05:53.307 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.307 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.307 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.307 19:36:34 -- accel/accel.sh@20 -- # val= 00:05:53.307 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.307 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.307 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:53.307 19:36:34 -- accel/accel.sh@20 -- # val= 00:05:53.307 19:36:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:53.307 19:36:34 -- accel/accel.sh@19 -- # IFS=: 00:05:53.307 19:36:34 -- accel/accel.sh@19 -- # read -r var val 00:05:54.679 19:36:35 -- accel/accel.sh@20 -- # val= 00:05:54.679 19:36:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.679 19:36:35 -- accel/accel.sh@19 -- # IFS=: 00:05:54.679 19:36:35 -- accel/accel.sh@19 -- # read -r var val 00:05:54.679 19:36:35 -- accel/accel.sh@20 -- # val= 00:05:54.679 19:36:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.679 19:36:35 -- accel/accel.sh@19 -- # IFS=: 00:05:54.679 19:36:35 -- accel/accel.sh@19 -- # read -r var val 00:05:54.679 19:36:35 -- accel/accel.sh@20 -- # val= 00:05:54.679 19:36:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.679 19:36:35 -- accel/accel.sh@19 -- # IFS=: 00:05:54.679 19:36:35 -- accel/accel.sh@19 -- # read -r var val 00:05:54.679 19:36:35 -- accel/accel.sh@20 -- # val= 00:05:54.679 19:36:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.679 19:36:35 -- accel/accel.sh@19 -- # IFS=: 00:05:54.679 19:36:35 -- accel/accel.sh@19 -- # read -r var val 00:05:54.679 19:36:35 -- accel/accel.sh@20 -- # val= 00:05:54.679 19:36:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.679 19:36:35 -- accel/accel.sh@19 -- # IFS=: 00:05:54.679 19:36:35 -- accel/accel.sh@19 -- # 
read -r var val 00:05:54.679 19:36:35 -- accel/accel.sh@20 -- # val= 00:05:54.679 19:36:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.679 19:36:35 -- accel/accel.sh@19 -- # IFS=: 00:05:54.679 19:36:35 -- accel/accel.sh@19 -- # read -r var val 00:05:54.679 19:36:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.679 19:36:35 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:54.679 19:36:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.679 00:05:54.679 real 0m1.486s 00:05:54.679 user 0m1.329s 00:05:54.679 sys 0m0.157s 00:05:54.679 19:36:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.679 19:36:35 -- common/autotest_common.sh@10 -- # set +x 00:05:54.679 ************************************ 00:05:54.679 END TEST accel_xor 00:05:54.679 ************************************ 00:05:54.679 19:36:35 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:54.679 19:36:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:54.679 19:36:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.679 19:36:35 -- common/autotest_common.sh@10 -- # set +x 00:05:54.679 ************************************ 00:05:54.679 START TEST accel_dif_verify 00:05:54.679 ************************************ 00:05:54.679 19:36:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:05:54.679 19:36:36 -- accel/accel.sh@16 -- # local accel_opc 00:05:54.679 19:36:36 -- accel/accel.sh@17 -- # local accel_module 00:05:54.679 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.679 19:36:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:54.679 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.679 19:36:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:54.679 19:36:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.679 19:36:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.679 19:36:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.679 19:36:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.679 19:36:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.679 19:36:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.679 19:36:36 -- accel/accel.sh@40 -- # local IFS=, 00:05:54.679 19:36:36 -- accel/accel.sh@41 -- # jq -r . 00:05:54.679 [2024-04-24 19:36:36.049900] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
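The DIF cases feed accel_perf more buffer parameters than the copy-style cases: the val trace below shows two '4096 bytes' buffers plus '512 bytes' and '8 bytes', which plausibly correspond to the data and metadata buffer sizes, the DIF block granularity, and the 8-byte protection-information tag — treat that mapping as a reading of the trace, not a statement of the accel API. The invocation itself has the same shape as before:

    ./build/examples/accel_perf -t 1 -w dif_verify    # path relative to the spdk checkout above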
00:05:54.679 [2024-04-24 19:36:36.049973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593672 ] 00:05:54.679 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.679 [2024-04-24 19:36:36.112344] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.937 [2024-04-24 19:36:36.238103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.937 19:36:36 -- accel/accel.sh@20 -- # val= 00:05:54.937 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.937 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.937 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.937 19:36:36 -- accel/accel.sh@20 -- # val= 00:05:54.937 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.937 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.937 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.937 19:36:36 -- accel/accel.sh@20 -- # val=0x1 00:05:54.937 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.937 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.937 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.937 19:36:36 -- accel/accel.sh@20 -- # val= 00:05:54.937 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.937 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.937 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.937 19:36:36 -- accel/accel.sh@20 -- # val= 00:05:54.937 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.937 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.937 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.937 19:36:36 -- accel/accel.sh@20 -- # val=dif_verify 00:05:54.937 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.937 19:36:36 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:54.937 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.937 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.937 19:36:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.937 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.937 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.937 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 19:36:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.963 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 19:36:36 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:54.963 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 19:36:36 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:54.963 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 19:36:36 -- accel/accel.sh@20 -- # val= 00:05:54.963 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 19:36:36 -- accel/accel.sh@20 -- # val=software 00:05:54.963 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 19:36:36 -- accel/accel.sh@22 -- # accel_module=software 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # read -r 
var val 00:05:54.963 19:36:36 -- accel/accel.sh@20 -- # val=32 00:05:54.963 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 19:36:36 -- accel/accel.sh@20 -- # val=32 00:05:54.963 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 19:36:36 -- accel/accel.sh@20 -- # val=1 00:05:54.963 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 19:36:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.963 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 19:36:36 -- accel/accel.sh@20 -- # val=No 00:05:54.963 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 19:36:36 -- accel/accel.sh@20 -- # val= 00:05:54.963 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:54.963 19:36:36 -- accel/accel.sh@20 -- # val= 00:05:54.963 19:36:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # IFS=: 00:05:54.963 19:36:36 -- accel/accel.sh@19 -- # read -r var val 00:05:56.341 19:36:37 -- accel/accel.sh@20 -- # val= 00:05:56.341 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.341 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.341 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.341 19:36:37 -- accel/accel.sh@20 -- # val= 00:05:56.341 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.341 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.341 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.341 19:36:37 -- accel/accel.sh@20 -- # val= 00:05:56.341 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.341 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.341 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.341 19:36:37 -- accel/accel.sh@20 -- # val= 00:05:56.341 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.341 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.341 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.341 19:36:37 -- accel/accel.sh@20 -- # val= 00:05:56.341 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.341 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.341 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.341 19:36:37 -- accel/accel.sh@20 -- # val= 00:05:56.341 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.341 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.341 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.341 19:36:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.341 19:36:37 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:56.341 19:36:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.341 00:05:56.341 real 0m1.493s 00:05:56.341 user 0m1.355s 00:05:56.341 sys 0m0.141s 00:05:56.341 19:36:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:56.341 19:36:37 -- common/autotest_common.sh@10 -- # set +x 00:05:56.341 
************************************ 00:05:56.341 END TEST accel_dif_verify 00:05:56.341 ************************************ 00:05:56.341 19:36:37 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:56.341 19:36:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:56.341 19:36:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.341 19:36:37 -- common/autotest_common.sh@10 -- # set +x 00:05:56.341 ************************************ 00:05:56.341 START TEST accel_dif_generate 00:05:56.341 ************************************ 00:05:56.341 19:36:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:05:56.341 19:36:37 -- accel/accel.sh@16 -- # local accel_opc 00:05:56.341 19:36:37 -- accel/accel.sh@17 -- # local accel_module 00:05:56.341 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.341 19:36:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:56.341 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.341 19:36:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:56.341 19:36:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.341 19:36:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.341 19:36:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.341 19:36:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.341 19:36:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.341 19:36:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.341 19:36:37 -- accel/accel.sh@40 -- # local IFS=, 00:05:56.341 19:36:37 -- accel/accel.sh@41 -- # jq -r . 00:05:56.341 [2024-04-24 19:36:37.661415] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
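Every case ends with the same three accel.sh@27 checks before the timing summary: a module name was parsed, an opcode was parsed, and the module is the software one. The xtrace lines read oddly in isolation ([[ -n software ]]) because bash prints the expanded values; the underlying test is roughly:

    [[ -n $accel_module ]]              # some module was reported at all
    [[ -n $accel_opc ]]                 # the expected opcode actually ran
    [[ $accel_module == "software" ]]   # and it ran on the software engine

(variable names assumed; the trace only shows the expanded form, including the \s\o\f\t\w\a\r\e escaping that xtrace applies to the right-hand side of == to mark it as a literal match).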
00:05:56.341 [2024-04-24 19:36:37.661481] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593840 ] 00:05:56.341 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.341 [2024-04-24 19:36:37.724135] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.341 [2024-04-24 19:36:37.847085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val= 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val= 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val=0x1 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val= 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val= 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val=dif_generate 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val= 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val=software 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@22 -- # accel_module=software 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read 
-r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val=32 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val=32 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val=1 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val=No 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val= 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:56.599 19:36:37 -- accel/accel.sh@20 -- # val= 00:05:56.599 19:36:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # IFS=: 00:05:56.599 19:36:37 -- accel/accel.sh@19 -- # read -r var val 00:05:57.971 19:36:39 -- accel/accel.sh@20 -- # val= 00:05:57.971 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.971 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:57.971 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:57.971 19:36:39 -- accel/accel.sh@20 -- # val= 00:05:57.971 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.971 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:57.971 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:57.971 19:36:39 -- accel/accel.sh@20 -- # val= 00:05:57.971 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.971 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:57.971 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:57.971 19:36:39 -- accel/accel.sh@20 -- # val= 00:05:57.971 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.971 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:57.971 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:57.971 19:36:39 -- accel/accel.sh@20 -- # val= 00:05:57.971 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.971 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:57.971 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:57.971 19:36:39 -- accel/accel.sh@20 -- # val= 00:05:57.971 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.971 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:57.971 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:57.971 19:36:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.971 19:36:39 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:57.971 19:36:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.971 00:05:57.971 real 0m1.489s 00:05:57.971 user 0m1.342s 00:05:57.971 sys 0m0.150s 00:05:57.971 19:36:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.971 19:36:39 -- common/autotest_common.sh@10 -- # set +x 00:05:57.971 
************************************ 00:05:57.971 END TEST accel_dif_generate 00:05:57.971 ************************************ 00:05:57.971 19:36:39 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:57.971 19:36:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:57.971 19:36:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.971 19:36:39 -- common/autotest_common.sh@10 -- # set +x 00:05:57.971 ************************************ 00:05:57.971 START TEST accel_dif_generate_copy 00:05:57.971 ************************************ 00:05:57.971 19:36:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:05:57.971 19:36:39 -- accel/accel.sh@16 -- # local accel_opc 00:05:57.971 19:36:39 -- accel/accel.sh@17 -- # local accel_module 00:05:57.971 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:57.971 19:36:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:57.971 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:57.971 19:36:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:57.971 19:36:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.971 19:36:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.971 19:36:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.971 19:36:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.971 19:36:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.971 19:36:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.971 19:36:39 -- accel/accel.sh@40 -- # local IFS=, 00:05:57.971 19:36:39 -- accel/accel.sh@41 -- # jq -r . 00:05:57.971 [2024-04-24 19:36:39.273425] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
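The bracketed DPDK EAL parameter dump repeats for every accel_perf process; only --file-prefix changes, since each run gets a fresh pid and must not collide with another process's hugepage files. Collected from the dumps in this log:

    --no-shconf -c 0x1 --huge-unlink --no-telemetry \
    --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 \
    --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid<pid>

The recurring 'EAL: No free 2048 kB hugepages reported on node 1' notice is consistent with hugepages being reserved on only one NUMA node and does not fail any of the cases here.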
00:05:57.971 [2024-04-24 19:36:39.273488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594120 ] 00:05:57.971 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.971 [2024-04-24 19:36:39.338213] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.971 [2024-04-24 19:36:39.468180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val= 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val= 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val=0x1 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val= 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val= 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val= 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val=software 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@22 -- # accel_module=software 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val=32 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val=32 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r 
var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val=1 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val=No 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val= 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:58.229 19:36:39 -- accel/accel.sh@20 -- # val= 00:05:58.229 19:36:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # IFS=: 00:05:58.229 19:36:39 -- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:40 -- accel/accel.sh@20 -- # val= 00:05:59.629 19:36:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.629 19:36:40 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:40 -- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:40 -- accel/accel.sh@20 -- # val= 00:05:59.629 19:36:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.629 19:36:40 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:40 -- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:40 -- accel/accel.sh@20 -- # val= 00:05:59.629 19:36:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.629 19:36:40 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:40 -- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:40 -- accel/accel.sh@20 -- # val= 00:05:59.629 19:36:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.629 19:36:40 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:40 -- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:40 -- accel/accel.sh@20 -- # val= 00:05:59.629 19:36:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.629 19:36:40 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:40 -- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:40 -- accel/accel.sh@20 -- # val= 00:05:59.629 19:36:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.629 19:36:40 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:40 -- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.629 19:36:40 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:59.629 19:36:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.629 00:05:59.629 real 0m1.485s 00:05:59.629 user 0m1.342s 00:05:59.629 sys 0m0.149s 00:05:59.629 19:36:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.629 19:36:40 -- common/autotest_common.sh@10 -- # set +x 00:05:59.629 ************************************ 00:05:59.629 END TEST accel_dif_generate_copy 00:05:59.629 ************************************ 00:05:59.629 19:36:40 -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:59.629 19:36:40 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.629 19:36:40 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:59.629 19:36:40 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.629 19:36:40 -- common/autotest_common.sh@10 -- # set +x 00:05:59.629 ************************************ 00:05:59.629 START TEST accel_comp 00:05:59.629 ************************************ 00:05:59.629 19:36:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.629 19:36:40 -- accel/accel.sh@16 -- # local accel_opc 00:05:59.629 19:36:40 -- accel/accel.sh@17 -- # local accel_module 00:05:59.629 19:36:40 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.629 19:36:40 -- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.629 19:36:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.629 19:36:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.629 19:36:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.629 19:36:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.629 19:36:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.629 19:36:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.629 19:36:40 -- accel/accel.sh@40 -- # local IFS=, 00:05:59.629 19:36:40 -- accel/accel.sh@41 -- # jq -r . 00:05:59.629 [2024-04-24 19:36:40.879063] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:05:59.629 [2024-04-24 19:36:40.879129] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594287 ] 00:05:59.629 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.629 [2024-04-24 19:36:40.941832] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.629 [2024-04-24 19:36:41.064522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.629 19:36:41 -- accel/accel.sh@20 -- # val= 00:05:59.629 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:41 -- accel/accel.sh@20 -- # val= 00:05:59.629 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:41 -- accel/accel.sh@20 -- # val= 00:05:59.629 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:41 -- accel/accel.sh@20 -- # val=0x1 00:05:59.629 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:41 -- accel/accel.sh@20 -- # val= 00:05:59.629 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:41 -- accel/accel.sh@20 -- # val= 00:05:59.629 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:41 
-- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:41 -- accel/accel.sh@20 -- # val=compress 00:05:59.629 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.629 19:36:41 -- accel/accel.sh@23 -- # accel_opc=compress 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.629 19:36:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.629 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.629 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.630 19:36:41 -- accel/accel.sh@20 -- # val= 00:05:59.630 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.630 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.630 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.901 19:36:41 -- accel/accel.sh@20 -- # val=software 00:05:59.901 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.901 19:36:41 -- accel/accel.sh@22 -- # accel_module=software 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.901 19:36:41 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.901 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.901 19:36:41 -- accel/accel.sh@20 -- # val=32 00:05:59.901 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.901 19:36:41 -- accel/accel.sh@20 -- # val=32 00:05:59.901 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.901 19:36:41 -- accel/accel.sh@20 -- # val=1 00:05:59.901 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.901 19:36:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.901 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.901 19:36:41 -- accel/accel.sh@20 -- # val=No 00:05:59.901 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.901 19:36:41 -- accel/accel.sh@20 -- # val= 00:05:59.901 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:05:59.901 19:36:41 -- accel/accel.sh@20 -- # val= 00:05:59.901 19:36:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # IFS=: 00:05:59.901 19:36:41 -- accel/accel.sh@19 -- # read -r var val 00:06:01.277 19:36:42 -- accel/accel.sh@20 -- # val= 00:06:01.277 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.277 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.277 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.277 19:36:42 -- accel/accel.sh@20 -- # val= 00:06:01.277 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.277 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.277 19:36:42 -- accel/accel.sh@19 -- # read 
-r var val 00:06:01.277 19:36:42 -- accel/accel.sh@20 -- # val= 00:06:01.277 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.277 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.277 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.277 19:36:42 -- accel/accel.sh@20 -- # val= 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val= 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val= 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.278 19:36:42 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:01.278 19:36:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.278 00:06:01.278 real 0m1.494s 00:06:01.278 user 0m1.352s 00:06:01.278 sys 0m0.150s 00:06:01.278 19:36:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.278 19:36:42 -- common/autotest_common.sh@10 -- # set +x 00:06:01.278 ************************************ 00:06:01.278 END TEST accel_comp 00:06:01.278 ************************************ 00:06:01.278 19:36:42 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.278 19:36:42 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:01.278 19:36:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.278 19:36:42 -- common/autotest_common.sh@10 -- # set +x 00:06:01.278 ************************************ 00:06:01.278 START TEST accel_decomp 00:06:01.278 ************************************ 00:06:01.278 19:36:42 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.278 19:36:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.278 19:36:42 -- accel/accel.sh@17 -- # local accel_module 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:01.278 19:36:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.278 19:36:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.278 19:36:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.278 19:36:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.278 19:36:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.278 19:36:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.278 19:36:42 -- accel/accel.sh@40 -- # local IFS=, 00:06:01.278 19:36:42 -- accel/accel.sh@41 -- # jq -r . 00:06:01.278 [2024-04-24 19:36:42.491923] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
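The accel_decomp case starting here drives the same accel_perf binary as the compress case above, now with -w decompress against the test/accel/bib input and -y to verify results. Since every build_accel_config guard in the trace evaluates false, the JSON config fed through -c /dev/fd/62 is effectively empty, so a minimal by-hand rerun of just this case (a sketch, assuming the same built workspace tree) can drop it:

  # 1-second software decompress pass over the bib test file, verifying output (-y)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y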
00:06:01.278 [2024-04-24 19:36:42.491991] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594453 ] 00:06:01.278 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.278 [2024-04-24 19:36:42.554004] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.278 [2024-04-24 19:36:42.675997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val= 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val= 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val= 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val=0x1 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val= 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val= 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val=decompress 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val= 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val=software 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@22 -- # accel_module=software 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val=32 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 
-- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val=32 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val=1 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val=Yes 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val= 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:01.278 19:36:42 -- accel/accel.sh@20 -- # val= 00:06:01.278 19:36:42 -- accel/accel.sh@21 -- # case "$var" in 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # IFS=: 00:06:01.278 19:36:42 -- accel/accel.sh@19 -- # read -r var val 00:06:02.659 19:36:43 -- accel/accel.sh@20 -- # val= 00:06:02.659 19:36:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.659 19:36:43 -- accel/accel.sh@19 -- # IFS=: 00:06:02.659 19:36:43 -- accel/accel.sh@19 -- # read -r var val 00:06:02.659 19:36:43 -- accel/accel.sh@20 -- # val= 00:06:02.659 19:36:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.659 19:36:43 -- accel/accel.sh@19 -- # IFS=: 00:06:02.659 19:36:43 -- accel/accel.sh@19 -- # read -r var val 00:06:02.659 19:36:43 -- accel/accel.sh@20 -- # val= 00:06:02.659 19:36:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.659 19:36:43 -- accel/accel.sh@19 -- # IFS=: 00:06:02.659 19:36:43 -- accel/accel.sh@19 -- # read -r var val 00:06:02.659 19:36:43 -- accel/accel.sh@20 -- # val= 00:06:02.659 19:36:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.659 19:36:43 -- accel/accel.sh@19 -- # IFS=: 00:06:02.659 19:36:43 -- accel/accel.sh@19 -- # read -r var val 00:06:02.659 19:36:43 -- accel/accel.sh@20 -- # val= 00:06:02.659 19:36:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.659 19:36:43 -- accel/accel.sh@19 -- # IFS=: 00:06:02.659 19:36:43 -- accel/accel.sh@19 -- # read -r var val 00:06:02.659 19:36:43 -- accel/accel.sh@20 -- # val= 00:06:02.659 19:36:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.659 19:36:43 -- accel/accel.sh@19 -- # IFS=: 00:06:02.659 19:36:43 -- accel/accel.sh@19 -- # read -r var val 00:06:02.660 19:36:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.660 19:36:43 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:02.660 19:36:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.660 00:06:02.660 real 0m1.484s 00:06:02.660 user 0m1.346s 00:06:02.660 sys 0m0.146s 00:06:02.660 19:36:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.660 19:36:43 -- common/autotest_common.sh@10 -- # set +x 00:06:02.660 ************************************ 00:06:02.660 END TEST accel_decomp 00:06:02.660 ************************************ 00:06:02.660 19:36:43 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.660 19:36:43 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:02.660 19:36:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.660 19:36:43 -- common/autotest_common.sh@10 -- # set +x 00:06:02.660 ************************************ 00:06:02.660 START TEST accel_decmop_full 00:06:02.660 ************************************ 00:06:02.660 19:36:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.660 19:36:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.660 19:36:44 -- accel/accel.sh@17 -- # local accel_module 00:06:02.660 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.660 19:36:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.660 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.660 19:36:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:02.660 19:36:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.660 19:36:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.660 19:36:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.660 19:36:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.660 19:36:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.660 19:36:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.660 19:36:44 -- accel/accel.sh@40 -- # local IFS=, 00:06:02.660 19:36:44 -- accel/accel.sh@41 -- # jq -r . 00:06:02.660 [2024-04-24 19:36:44.090504] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
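accel_decmop_full differs from the plain decompress case only by -o 0: the trace below echoes '111250 bytes' where the default runs echo '4096 bytes', so this apparently sizes each operation to the whole input file instead of 4 KiB blocks. Equivalent standalone invocation, under the same assumptions as the sketch above:

  # whole-file decompress ops: -o 0 expands the op size to the full bib file (111250 bytes)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0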
00:06:02.660 [2024-04-24 19:36:44.090574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594729 ] 00:06:02.660 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.660 [2024-04-24 19:36:44.157185] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.917 [2024-04-24 19:36:44.279106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.917 19:36:44 -- accel/accel.sh@20 -- # val= 00:06:02.917 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.917 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.917 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val= 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val= 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val=0x1 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val= 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val= 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val=decompress 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val= 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val=software 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@22 -- # accel_module=software 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val=32 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 
19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val=32 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val=1 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val=Yes 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val= 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:02.918 19:36:44 -- accel/accel.sh@20 -- # val= 00:06:02.918 19:36:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # IFS=: 00:06:02.918 19:36:44 -- accel/accel.sh@19 -- # read -r var val 00:06:04.290 19:36:45 -- accel/accel.sh@20 -- # val= 00:06:04.290 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.290 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.290 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.290 19:36:45 -- accel/accel.sh@20 -- # val= 00:06:04.290 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.290 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.290 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.290 19:36:45 -- accel/accel.sh@20 -- # val= 00:06:04.290 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.290 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.290 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.290 19:36:45 -- accel/accel.sh@20 -- # val= 00:06:04.290 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.290 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.290 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.290 19:36:45 -- accel/accel.sh@20 -- # val= 00:06:04.290 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.290 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.290 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.290 19:36:45 -- accel/accel.sh@20 -- # val= 00:06:04.290 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.290 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.290 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.290 19:36:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.290 19:36:45 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:04.290 19:36:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.290 00:06:04.290 real 0m1.506s 00:06:04.290 user 0m1.364s 00:06:04.290 sys 0m0.150s 00:06:04.290 19:36:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.290 19:36:45 -- common/autotest_common.sh@10 -- # set +x 00:06:04.290 ************************************ 00:06:04.290 END TEST accel_decmop_full 00:06:04.290 ************************************ 00:06:04.290 19:36:45 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.290 19:36:45 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:04.290 19:36:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.290 19:36:45 -- common/autotest_common.sh@10 -- # set +x 00:06:04.290 ************************************ 00:06:04.290 START TEST accel_decomp_mcore 00:06:04.290 ************************************ 00:06:04.290 19:36:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.290 19:36:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.290 19:36:45 -- accel/accel.sh@17 -- # local accel_module 00:06:04.290 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.290 19:36:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.290 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.290 19:36:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:04.290 19:36:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.290 19:36:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.290 19:36:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.290 19:36:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.290 19:36:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.290 19:36:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.290 19:36:45 -- accel/accel.sh@40 -- # local IFS=, 00:06:04.290 19:36:45 -- accel/accel.sh@41 -- # jq -r . 00:06:04.290 [2024-04-24 19:36:45.722274] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
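The mcore variant adds -m 0xf, SPDK's usual hex core mask; 0xf selects cores 0-3, matching the "Total cores available: 4" notice and the four reactor start-up lines just below. Standalone form, same assumptions:

  # multi-core decompress: core mask 0xf starts reactors on cores 0-3
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf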
00:06:04.290 [2024-04-24 19:36:45.722337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1594903 ] 00:06:04.290 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.290 [2024-04-24 19:36:45.788842] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.548 [2024-04-24 19:36:45.915205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.548 [2024-04-24 19:36:45.915261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.548 [2024-04-24 19:36:45.915315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.548 [2024-04-24 19:36:45.915319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val= 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val= 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val= 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val=0xf 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val= 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val= 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val=decompress 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val= 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val=software 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@22 -- # accel_module=software 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val=32 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val=32 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val=1 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val=Yes 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val= 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:04.548 19:36:45 -- accel/accel.sh@20 -- # val= 00:06:04.548 19:36:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # IFS=: 00:06:04.548 19:36:45 -- accel/accel.sh@19 -- # read -r var val 00:06:05.922 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:05.922 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:05.922 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:05.922 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:05.922 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:05.922 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:05.922 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:05.922 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:05.922 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:05.922 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:05.922 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:05.922 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:05.922 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:05.922 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:05.922 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:05.922 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.922 
19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:05.922 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:05.922 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:05.922 19:36:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.922 19:36:47 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:05.922 19:36:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.922 00:06:05.922 real 0m1.504s 00:06:05.922 user 0m4.822s 00:06:05.922 sys 0m0.168s 00:06:05.922 19:36:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.922 19:36:47 -- common/autotest_common.sh@10 -- # set +x 00:06:05.922 ************************************ 00:06:05.922 END TEST accel_decomp_mcore 00:06:05.922 ************************************ 00:06:05.922 19:36:47 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.922 19:36:47 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:05.922 19:36:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.922 19:36:47 -- common/autotest_common.sh@10 -- # set +x 00:06:05.922 ************************************ 00:06:05.922 START TEST accel_decomp_full_mcore 00:06:05.922 ************************************ 00:06:05.922 19:36:47 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.922 19:36:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:05.922 19:36:47 -- accel/accel.sh@17 -- # local accel_module 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:05.922 19:36:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.922 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:05.922 19:36:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:05.922 19:36:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.922 19:36:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.922 19:36:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.922 19:36:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.922 19:36:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.922 19:36:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.922 19:36:47 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.922 19:36:47 -- accel/accel.sh@41 -- # jq -r . 00:06:05.922 [2024-04-24 19:36:47.352561] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
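accel_decomp_full_mcore simply combines the two previous knobs: whole-file operations (-o 0, echoed as '111250 bytes' below) fanned out across the 0xf core mask. Standalone form:

  # whole-file decompress spread over reactors on cores 0-3
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf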
00:06:05.923 [2024-04-24 19:36:47.352626] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595184 ] 00:06:05.923 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.923 [2024-04-24 19:36:47.416359] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.181 [2024-04-24 19:36:47.550162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.181 [2024-04-24 19:36:47.550212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.181 [2024-04-24 19:36:47.552654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.181 [2024-04-24 19:36:47.552659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val=0xf 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val=decompress 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val=software 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@22 -- # accel_module=software 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val=32 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val=32 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val=1 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.181 19:36:47 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.181 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.181 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.182 19:36:47 -- accel/accel.sh@20 -- # val=Yes 00:06:06.182 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.182 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.182 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.182 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:06.182 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.182 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.182 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:06.182 19:36:47 -- accel/accel.sh@20 -- # val= 00:06:06.182 19:36:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.182 19:36:47 -- accel/accel.sh@19 -- # IFS=: 00:06:06.182 19:36:47 -- accel/accel.sh@19 -- # read -r var val 00:06:07.555 19:36:48 -- accel/accel.sh@20 -- # val= 00:06:07.555 19:36:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.555 19:36:48 -- accel/accel.sh@19 -- # IFS=: 00:06:07.555 19:36:48 -- accel/accel.sh@19 -- # read -r var val 00:06:07.555 19:36:48 -- accel/accel.sh@20 -- # val= 00:06:07.555 19:36:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.555 19:36:48 -- accel/accel.sh@19 -- # IFS=: 00:06:07.555 19:36:48 -- accel/accel.sh@19 -- # read -r var val 00:06:07.555 19:36:48 -- accel/accel.sh@20 -- # val= 00:06:07.555 19:36:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.555 19:36:48 -- accel/accel.sh@19 -- # IFS=: 00:06:07.555 19:36:48 -- accel/accel.sh@19 -- # read -r var val 00:06:07.555 19:36:48 -- accel/accel.sh@20 -- # val= 00:06:07.555 19:36:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.555 19:36:48 -- accel/accel.sh@19 -- # IFS=: 00:06:07.555 19:36:48 -- accel/accel.sh@19 -- # read -r var val 00:06:07.555 19:36:48 -- accel/accel.sh@20 -- # val= 00:06:07.555 19:36:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.555 19:36:48 -- accel/accel.sh@19 -- # IFS=: 00:06:07.555 19:36:48 -- accel/accel.sh@19 -- # read -r var val 00:06:07.555 19:36:48 -- accel/accel.sh@20 -- # val= 00:06:07.555 19:36:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.555 19:36:48 -- accel/accel.sh@19 -- # IFS=: 00:06:07.556 19:36:48 -- accel/accel.sh@19 -- # read -r var val 00:06:07.556 19:36:48 -- accel/accel.sh@20 -- # val= 00:06:07.556 19:36:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.556 19:36:48 -- accel/accel.sh@19 -- # IFS=: 00:06:07.556 19:36:48 -- accel/accel.sh@19 -- # read -r var val 00:06:07.556 19:36:48 -- accel/accel.sh@20 -- # val= 00:06:07.556 19:36:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.556 
19:36:48 -- accel/accel.sh@19 -- # IFS=: 00:06:07.556 19:36:48 -- accel/accel.sh@19 -- # read -r var val 00:06:07.556 19:36:48 -- accel/accel.sh@20 -- # val= 00:06:07.556 19:36:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.556 19:36:48 -- accel/accel.sh@19 -- # IFS=: 00:06:07.556 19:36:48 -- accel/accel.sh@19 -- # read -r var val 00:06:07.556 19:36:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.556 19:36:48 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:07.556 19:36:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.556 00:06:07.556 real 0m1.502s 00:06:07.556 user 0m4.815s 00:06:07.556 sys 0m0.150s 00:06:07.556 19:36:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.556 19:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:07.556 ************************************ 00:06:07.556 END TEST accel_decomp_full_mcore 00:06:07.556 ************************************ 00:06:07.556 19:36:48 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:07.556 19:36:48 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:07.556 19:36:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.556 19:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:07.556 ************************************ 00:06:07.556 START TEST accel_decomp_mthread 00:06:07.556 ************************************ 00:06:07.556 19:36:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:07.556 19:36:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.556 19:36:48 -- accel/accel.sh@17 -- # local accel_module 00:06:07.556 19:36:48 -- accel/accel.sh@19 -- # IFS=: 00:06:07.556 19:36:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:07.556 19:36:48 -- accel/accel.sh@19 -- # read -r var val 00:06:07.556 19:36:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:07.556 19:36:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.556 19:36:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.556 19:36:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.556 19:36:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.556 19:36:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.556 19:36:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.556 19:36:48 -- accel/accel.sh@40 -- # local IFS=, 00:06:07.556 19:36:48 -- accel/accel.sh@41 -- # jq -r . 00:06:07.556 [2024-04-24 19:36:48.977681] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
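The mthread case stays on one core but adds -T 2; the trace below echoes val=2 where the single-threaded runs echo val=1, which reads as a request for two worker threads rather than one (a hedged interpretation, inferred from the trace rather than stated anywhere in this log). Standalone form:

  # single-core decompress, two worker threads (-T 2, as inferred from the trace)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2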
00:06:07.556 [2024-04-24 19:36:48.977746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595356 ] 00:06:07.556 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.556 [2024-04-24 19:36:49.039239] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.815 [2024-04-24 19:36:49.162535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val= 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val= 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val= 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val=0x1 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val= 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val= 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val=decompress 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val= 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val=software 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@22 -- # accel_module=software 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val=32 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 
-- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val=32 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val=2 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val=Yes 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val= 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:07.815 19:36:49 -- accel/accel.sh@20 -- # val= 00:06:07.815 19:36:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # IFS=: 00:06:07.815 19:36:49 -- accel/accel.sh@19 -- # read -r var val 00:06:09.284 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.284 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.284 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.284 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.284 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.284 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.284 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.284 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.284 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.284 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.284 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.284 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.284 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.284 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.284 19:36:50 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.284 19:36:50 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:09.284 19:36:50 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.284 00:06:09.284 real 0m1.497s 00:06:09.284 user 0m1.355s 00:06:09.284 sys 0m0.151s 00:06:09.284 19:36:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.284 19:36:50 -- common/autotest_common.sh@10 -- # set +x 
00:06:09.284 ************************************ 00:06:09.284 END TEST accel_decomp_mthread 00:06:09.284 ************************************ 00:06:09.284 19:36:50 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.284 19:36:50 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:09.284 19:36:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.284 19:36:50 -- common/autotest_common.sh@10 -- # set +x 00:06:09.284 ************************************ 00:06:09.284 START TEST accel_deomp_full_mthread 00:06:09.284 ************************************ 00:06:09.284 19:36:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.284 19:36:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.284 19:36:50 -- accel/accel.sh@17 -- # local accel_module 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.284 19:36:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.284 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.284 19:36:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.284 19:36:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.284 19:36:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.285 19:36:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.285 19:36:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.285 19:36:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.285 19:36:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.285 19:36:50 -- accel/accel.sh@40 -- # local IFS=, 00:06:09.285 19:36:50 -- accel/accel.sh@41 -- # jq -r . 00:06:09.285 [2024-04-24 19:36:50.599295] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
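The last case of this group, accel_deomp_full_mthread (spelled that way by the harness itself), combines the whole-file op size with -T 2 threading. Standalone form, same assumptions as the earlier sketches:

  # whole-file decompress with two worker threads on a single core
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2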
00:06:09.285 [2024-04-24 19:36:50.599361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595553 ] 00:06:09.285 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.285 [2024-04-24 19:36:50.662248] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.285 [2024-04-24 19:36:50.784908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.542 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.542 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.542 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.542 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.542 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.542 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.542 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val=0x1 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val=decompress 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val=software 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@22 -- # accel_module=software 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val=32 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 
19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val=32 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val=2 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val=Yes 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:09.543 19:36:50 -- accel/accel.sh@20 -- # val= 00:06:09.543 19:36:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # IFS=: 00:06:09.543 19:36:50 -- accel/accel.sh@19 -- # read -r var val 00:06:10.917 19:36:52 -- accel/accel.sh@20 -- # val= 00:06:10.917 19:36:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.917 19:36:52 -- accel/accel.sh@19 -- # IFS=: 00:06:10.917 19:36:52 -- accel/accel.sh@19 -- # read -r var val 00:06:10.917 19:36:52 -- accel/accel.sh@20 -- # val= 00:06:10.917 19:36:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.917 19:36:52 -- accel/accel.sh@19 -- # IFS=: 00:06:10.917 19:36:52 -- accel/accel.sh@19 -- # read -r var val 00:06:10.917 19:36:52 -- accel/accel.sh@20 -- # val= 00:06:10.917 19:36:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.917 19:36:52 -- accel/accel.sh@19 -- # IFS=: 00:06:10.917 19:36:52 -- accel/accel.sh@19 -- # read -r var val 00:06:10.917 19:36:52 -- accel/accel.sh@20 -- # val= 00:06:10.917 19:36:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.917 19:36:52 -- accel/accel.sh@19 -- # IFS=: 00:06:10.917 19:36:52 -- accel/accel.sh@19 -- # read -r var val 00:06:10.917 19:36:52 -- accel/accel.sh@20 -- # val= 00:06:10.917 19:36:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.917 19:36:52 -- accel/accel.sh@19 -- # IFS=: 00:06:10.917 19:36:52 -- accel/accel.sh@19 -- # read -r var val 00:06:10.917 19:36:52 -- accel/accel.sh@20 -- # val= 00:06:10.917 19:36:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.917 19:36:52 -- accel/accel.sh@19 -- # IFS=: 00:06:10.917 19:36:52 -- accel/accel.sh@19 -- # read -r var val 00:06:10.917 19:36:52 -- accel/accel.sh@20 -- # val= 00:06:10.917 19:36:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:10.917 19:36:52 -- accel/accel.sh@19 -- # IFS=: 00:06:10.917 19:36:52 -- accel/accel.sh@19 -- # read -r var val 00:06:10.917 19:36:52 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.917 19:36:52 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:10.917 19:36:52 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.917 00:06:10.917 real 0m1.533s 00:06:10.917 user 0m1.398s 00:06:10.917 sys 0m0.143s 00:06:10.917 19:36:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.917 19:36:52 -- common/autotest_common.sh@10 -- # 
set +x 00:06:10.917 ************************************ 00:06:10.917 END TEST accel_deomp_full_mthread 00:06:10.917 ************************************ 00:06:10.917 19:36:52 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:10.917 19:36:52 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:10.917 19:36:52 -- accel/accel.sh@137 -- # build_accel_config 00:06:10.917 19:36:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:10.917 19:36:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.917 19:36:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.917 19:36:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.917 19:36:52 -- common/autotest_common.sh@10 -- # set +x 00:06:10.917 19:36:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.917 19:36:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.917 19:36:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.917 19:36:52 -- accel/accel.sh@40 -- # local IFS=, 00:06:10.917 19:36:52 -- accel/accel.sh@41 -- # jq -r . 00:06:10.917 ************************************ 00:06:10.917 START TEST accel_dif_functional_tests 00:06:10.917 ************************************ 00:06:10.917 19:36:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:10.917 [2024-04-24 19:36:52.269295] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:06:10.917 [2024-04-24 19:36:52.269355] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595802 ] 00:06:10.917 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.917 [2024-04-24 19:36:52.329150] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.175 [2024-04-24 19:36:52.454735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.175 [2024-04-24 19:36:52.454793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.175 [2024-04-24 19:36:52.454797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.175 00:06:11.175 00:06:11.175 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.175 http://cunit.sourceforge.net/ 00:06:11.175 00:06:11.175 00:06:11.175 Suite: accel_dif 00:06:11.175 Test: verify: DIF generated, GUARD check ...passed 00:06:11.175 Test: verify: DIF generated, APPTAG check ...passed 00:06:11.175 Test: verify: DIF generated, REFTAG check ...passed 00:06:11.175 Test: verify: DIF not generated, GUARD check ...[2024-04-24 19:36:52.557355] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:11.175 [2024-04-24 19:36:52.557424] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:11.175 passed 00:06:11.175 Test: verify: DIF not generated, APPTAG check ...[2024-04-24 19:36:52.557469] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:11.175 [2024-04-24 19:36:52.557500] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:11.175 passed 00:06:11.175 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 19:36:52.557536] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:11.175 [2024-04-24 
19:36:52.557567] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:11.175 passed 00:06:11.175 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:11.175 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-24 19:36:52.557648] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:11.175 passed 00:06:11.176 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:11.176 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:11.176 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:11.176 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 19:36:52.557822] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:11.176 passed 00:06:11.176 Test: generate copy: DIF generated, GUARD check ...passed 00:06:11.176 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:11.176 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:11.176 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:11.176 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:11.176 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:11.176 Test: generate copy: iovecs-len validate ...[2024-04-24 19:36:52.558091] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:11.176 passed 00:06:11.176 Test: generate copy: buffer alignment validate ...passed 00:06:11.176 00:06:11.176 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.176 suites 1 1 n/a 0 0 00:06:11.176 tests 20 20 20 0 0 00:06:11.176 asserts 204 204 204 0 n/a 00:06:11.176 00:06:11.176 Elapsed time = 0.003 seconds 00:06:11.434 00:06:11.434 real 0m0.600s 00:06:11.434 user 0m0.880s 00:06:11.434 sys 0m0.182s 00:06:11.434 19:36:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.434 19:36:52 -- common/autotest_common.sh@10 -- # set +x 00:06:11.434 ************************************ 00:06:11.434 END TEST accel_dif_functional_tests 00:06:11.434 ************************************ 00:06:11.434 00:06:11.434 real 0m35.568s 00:06:11.434 user 0m37.677s 00:06:11.434 sys 0m5.650s 00:06:11.434 19:36:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.434 19:36:52 -- common/autotest_common.sh@10 -- # set +x 00:06:11.434 ************************************ 00:06:11.434 END TEST accel 00:06:11.434 ************************************ 00:06:11.434 19:36:52 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:11.434 19:36:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.434 19:36:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.434 19:36:52 -- common/autotest_common.sh@10 -- # set +x 00:06:11.691 ************************************ 00:06:11.691 START TEST accel_rpc 00:06:11.691 ************************************ 00:06:11.691 19:36:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:11.691 * Looking for test storage... 
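# Editor's note: the *ERROR* "Failed to compare ..." lines above are expected
# output, not failures — the DIF suite deliberately injects guard, app-tag and
# ref-tag mismatches and asserts that verification catches them, which is why
# the CUnit summary still reports 20/20 tests and 204/204 asserts passing.
# A hedged standalone reproduction (same binary and config style as the log;
# the empty JSON config is an assumption):
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c <(echo '{}')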
00:06:11.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:11.691 19:36:53 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.691 19:36:53 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1595996 00:06:11.691 19:36:53 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:11.691 19:36:53 -- accel/accel_rpc.sh@15 -- # waitforlisten 1595996 00:06:11.691 19:36:53 -- common/autotest_common.sh@817 -- # '[' -z 1595996 ']' 00:06:11.691 19:36:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.691 19:36:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:11.691 19:36:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.691 19:36:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:11.691 19:36:53 -- common/autotest_common.sh@10 -- # set +x 00:06:11.691 [2024-04-24 19:36:53.077874] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:06:11.691 [2024-04-24 19:36:53.077970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595996 ] 00:06:11.691 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.691 [2024-04-24 19:36:53.139682] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.949 [2024-04-24 19:36:53.259701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.881 19:36:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:12.881 19:36:54 -- common/autotest_common.sh@850 -- # return 0 00:06:12.881 19:36:54 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:12.881 19:36:54 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:12.881 19:36:54 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:12.881 19:36:54 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:12.881 19:36:54 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:12.881 19:36:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.881 19:36:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.881 19:36:54 -- common/autotest_common.sh@10 -- # set +x 00:06:12.881 ************************************ 00:06:12.881 START TEST accel_assign_opcode 00:06:12.881 ************************************ 00:06:12.881 19:36:54 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:12.881 19:36:54 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:12.881 19:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.881 19:36:54 -- common/autotest_common.sh@10 -- # set +x 00:06:12.881 [2024-04-24 19:36:54.146420] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:12.881 19:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.881 19:36:54 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:12.881 19:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.881 19:36:54 -- common/autotest_common.sh@10 -- # set +x 00:06:12.881 [2024-04-24 19:36:54.154405] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:06:12.881 19:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.881 19:36:54 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:12.881 19:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.881 19:36:54 -- common/autotest_common.sh@10 -- # set +x 00:06:13.139 19:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.139 19:36:54 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:13.139 19:36:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:13.139 19:36:54 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:13.139 19:36:54 -- common/autotest_common.sh@10 -- # set +x 00:06:13.139 19:36:54 -- accel/accel_rpc.sh@42 -- # grep software 00:06:13.139 19:36:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:13.139 software 00:06:13.139 00:06:13.139 real 0m0.308s 00:06:13.139 user 0m0.041s 00:06:13.139 sys 0m0.006s 00:06:13.139 19:36:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.139 19:36:54 -- common/autotest_common.sh@10 -- # set +x 00:06:13.139 ************************************ 00:06:13.139 END TEST accel_assign_opcode 00:06:13.139 ************************************ 00:06:13.139 19:36:54 -- accel/accel_rpc.sh@55 -- # killprocess 1595996 00:06:13.139 19:36:54 -- common/autotest_common.sh@936 -- # '[' -z 1595996 ']' 00:06:13.139 19:36:54 -- common/autotest_common.sh@940 -- # kill -0 1595996 00:06:13.139 19:36:54 -- common/autotest_common.sh@941 -- # uname 00:06:13.139 19:36:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.139 19:36:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1595996 00:06:13.139 19:36:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.139 19:36:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.139 19:36:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1595996' 00:06:13.139 killing process with pid 1595996 00:06:13.139 19:36:54 -- common/autotest_common.sh@955 -- # kill 1595996 00:06:13.139 19:36:54 -- common/autotest_common.sh@960 -- # wait 1595996 00:06:13.708 00:06:13.708 real 0m1.990s 00:06:13.708 user 0m2.150s 00:06:13.708 sys 0m0.516s 00:06:13.708 19:36:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:13.708 19:36:54 -- common/autotest_common.sh@10 -- # set +x 00:06:13.708 ************************************ 00:06:13.708 END TEST accel_rpc 00:06:13.708 ************************************ 00:06:13.708 19:36:54 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:13.708 19:36:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.708 19:36:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.708 19:36:54 -- common/autotest_common.sh@10 -- # set +x 00:06:13.708 ************************************ 00:06:13.708 START TEST app_cmdline 00:06:13.708 ************************************ 00:06:13.708 19:36:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:13.708 * Looking for test storage... 
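# Editor's note: the accel_assign_opcode test above is, in essence, the RPC
# sequence below against a spdk_tgt started with --wait-for-rpc. A sketch:
# rpc.py talks to /var/tmp/spdk.sock by default, matching the waitforlisten
# output above, and the later assignment overrides the earlier one, so after
# framework_start_init the copy opcode resolves to the software module:
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/bin/spdk_tgt --wait-for-rpc &
./scripts/rpc.py accel_assign_opc -o copy -m incorrect    # accepted pre-init
./scripts/rpc.py accel_assign_opc -o copy -m software     # wins
./scripts/rpc.py framework_start_init
./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # -> software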
00:06:13.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:13.708 19:36:55 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:13.708 19:36:55 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1596266 00:06:13.708 19:36:55 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:13.708 19:36:55 -- app/cmdline.sh@18 -- # waitforlisten 1596266 00:06:13.708 19:36:55 -- common/autotest_common.sh@817 -- # '[' -z 1596266 ']' 00:06:13.708 19:36:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.708 19:36:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:13.708 19:36:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.708 19:36:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:13.708 19:36:55 -- common/autotest_common.sh@10 -- # set +x 00:06:13.708 [2024-04-24 19:36:55.197110] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:06:13.708 [2024-04-24 19:36:55.197233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1596266 ] 00:06:13.966 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.966 [2024-04-24 19:36:55.255264] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.966 [2024-04-24 19:36:55.361315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.224 19:36:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:14.224 19:36:55 -- common/autotest_common.sh@850 -- # return 0 00:06:14.224 19:36:55 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:14.482 { 00:06:14.482 "version": "SPDK v24.05-pre git sha1 166ede64d", 00:06:14.482 "fields": { 00:06:14.482 "major": 24, 00:06:14.482 "minor": 5, 00:06:14.482 "patch": 0, 00:06:14.482 "suffix": "-pre", 00:06:14.482 "commit": "166ede64d" 00:06:14.482 } 00:06:14.482 } 00:06:14.482 19:36:55 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:14.482 19:36:55 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:14.482 19:36:55 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:14.482 19:36:55 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:14.482 19:36:55 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:14.482 19:36:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:14.482 19:36:55 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:14.482 19:36:55 -- common/autotest_common.sh@10 -- # set +x 00:06:14.482 19:36:55 -- app/cmdline.sh@26 -- # sort 00:06:14.482 19:36:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:14.482 19:36:55 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:14.482 19:36:55 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:14.482 19:36:55 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.482 19:36:55 -- common/autotest_common.sh@638 -- # local es=0 00:06:14.482 19:36:55 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.482 19:36:55 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.482 19:36:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:14.482 19:36:55 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.482 19:36:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:14.482 19:36:55 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.482 19:36:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:14.482 19:36:55 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:14.482 19:36:55 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:14.482 19:36:55 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.739 request: 00:06:14.739 { 00:06:14.739 "method": "env_dpdk_get_mem_stats", 00:06:14.739 "req_id": 1 00:06:14.739 } 00:06:14.739 Got JSON-RPC error response 00:06:14.739 response: 00:06:14.739 { 00:06:14.739 "code": -32601, 00:06:14.739 "message": "Method not found" 00:06:14.739 } 00:06:14.739 19:36:56 -- common/autotest_common.sh@641 -- # es=1 00:06:14.739 19:36:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:14.739 19:36:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:14.739 19:36:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:14.740 19:36:56 -- app/cmdline.sh@1 -- # killprocess 1596266 00:06:14.740 19:36:56 -- common/autotest_common.sh@936 -- # '[' -z 1596266 ']' 00:06:14.740 19:36:56 -- common/autotest_common.sh@940 -- # kill -0 1596266 00:06:14.740 19:36:56 -- common/autotest_common.sh@941 -- # uname 00:06:14.740 19:36:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.997 19:36:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1596266 00:06:14.997 19:36:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.997 19:36:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.997 19:36:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1596266' 00:06:14.997 killing process with pid 1596266 00:06:14.997 19:36:56 -- common/autotest_common.sh@955 -- # kill 1596266 00:06:14.997 19:36:56 -- common/autotest_common.sh@960 -- # wait 1596266 00:06:15.258 00:06:15.258 real 0m1.664s 00:06:15.258 user 0m2.040s 00:06:15.258 sys 0m0.480s 00:06:15.258 19:36:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.258 19:36:56 -- common/autotest_common.sh@10 -- # set +x 00:06:15.258 ************************************ 00:06:15.258 END TEST app_cmdline 00:06:15.258 ************************************ 00:06:15.518 19:36:56 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:15.518 19:36:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.518 19:36:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.518 19:36:56 -- common/autotest_common.sh@10 -- # set +x 00:06:15.518 ************************************ 00:06:15.518 START TEST version 00:06:15.518 
************************************ 00:06:15.518 19:36:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:15.518 * Looking for test storage... 00:06:15.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:15.518 19:36:56 -- app/version.sh@17 -- # get_header_version major 00:06:15.518 19:36:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.518 19:36:56 -- app/version.sh@14 -- # cut -f2 00:06:15.518 19:36:56 -- app/version.sh@14 -- # tr -d '"' 00:06:15.518 19:36:56 -- app/version.sh@17 -- # major=24 00:06:15.518 19:36:56 -- app/version.sh@18 -- # get_header_version minor 00:06:15.518 19:36:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.518 19:36:56 -- app/version.sh@14 -- # cut -f2 00:06:15.518 19:36:56 -- app/version.sh@14 -- # tr -d '"' 00:06:15.518 19:36:56 -- app/version.sh@18 -- # minor=5 00:06:15.518 19:36:56 -- app/version.sh@19 -- # get_header_version patch 00:06:15.518 19:36:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.518 19:36:56 -- app/version.sh@14 -- # cut -f2 00:06:15.518 19:36:56 -- app/version.sh@14 -- # tr -d '"' 00:06:15.518 19:36:56 -- app/version.sh@19 -- # patch=0 00:06:15.518 19:36:56 -- app/version.sh@20 -- # get_header_version suffix 00:06:15.518 19:36:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:15.518 19:36:56 -- app/version.sh@14 -- # cut -f2 00:06:15.518 19:36:56 -- app/version.sh@14 -- # tr -d '"' 00:06:15.518 19:36:56 -- app/version.sh@20 -- # suffix=-pre 00:06:15.518 19:36:56 -- app/version.sh@22 -- # version=24.5 00:06:15.518 19:36:56 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:15.518 19:36:56 -- app/version.sh@28 -- # version=24.5rc0 00:06:15.518 19:36:56 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:15.518 19:36:56 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:15.518 19:36:56 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:15.518 19:36:56 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:15.518 00:06:15.518 real 0m0.102s 00:06:15.518 user 0m0.061s 00:06:15.518 sys 0m0.063s 00:06:15.518 19:36:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.518 19:36:56 -- common/autotest_common.sh@10 -- # set +x 00:06:15.518 ************************************ 00:06:15.518 END TEST version 00:06:15.518 ************************************ 00:06:15.518 19:36:56 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:15.518 19:36:56 -- spdk/autotest.sh@194 -- # uname -s 00:06:15.518 19:36:57 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:15.518 19:36:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:15.518 19:36:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:15.518 19:36:57 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:15.518 19:36:57 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:15.518 19:36:57 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:15.518 19:36:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:15.518 19:36:57 -- common/autotest_common.sh@10 -- # set +x 00:06:15.518 19:36:57 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:15.518 19:36:57 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:15.518 19:36:57 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:15.518 19:36:57 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:15.518 19:36:57 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:15.518 19:36:57 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:15.518 19:36:57 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:15.518 19:36:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:15.518 19:36:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.518 19:36:57 -- common/autotest_common.sh@10 -- # set +x 00:06:15.777 ************************************ 00:06:15.777 START TEST nvmf_tcp 00:06:15.777 ************************************ 00:06:15.777 19:36:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:15.777 * Looking for test storage... 00:06:15.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:15.777 19:36:57 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:15.777 19:36:57 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:15.777 19:36:57 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.777 19:36:57 -- nvmf/common.sh@7 -- # uname -s 00:06:15.777 19:36:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.777 19:36:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.777 19:36:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.777 19:36:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.777 19:36:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.777 19:36:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.777 19:36:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.777 19:36:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.777 19:36:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.777 19:36:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.777 19:36:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.777 19:36:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:15.777 19:36:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.777 19:36:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.777 19:36:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.777 19:36:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.777 19:36:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.777 19:36:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.777 19:36:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.777 19:36:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.777 19:36:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.777 19:36:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.777 19:36:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.777 19:36:57 -- paths/export.sh@5 -- # export PATH 00:06:15.777 19:36:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.777 19:36:57 -- nvmf/common.sh@47 -- # : 0 00:06:15.777 19:36:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.777 19:36:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.777 19:36:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.777 19:36:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.777 19:36:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.778 19:36:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.778 19:36:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.778 19:36:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.778 19:36:57 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:15.778 19:36:57 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:15.778 19:36:57 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:15.778 19:36:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:15.778 19:36:57 -- common/autotest_common.sh@10 -- # set +x 00:06:15.778 19:36:57 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:15.778 19:36:57 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:15.778 19:36:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:15.778 19:36:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.778 19:36:57 -- common/autotest_common.sh@10 -- # set +x 00:06:16.038 ************************************ 00:06:16.038 START TEST nvmf_example 00:06:16.038 ************************************ 00:06:16.038 19:36:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:16.038 * Looking for test storage... 
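# Editor's note: nvmf/common.sh (sourced just above by nvmf.sh, and again
# below by nvmf_example.sh) exports the shared defaults these tests build on:
# NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVME_CONNECT='nvme connect', and
# NVME_HOST=(--hostnqn=... --hostid=...) derived from "nvme gen-hostnqn".
# A hedged sketch of how an initiator-side test consumes them (this example
# test itself drives I/O with spdk_nvme_perf instead, as seen further down):
$NVME_CONNECT -t tcp -a 10.0.0.2 -s "$NVMF_PORT" \
    -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"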
00:06:16.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:16.038 19:36:57 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.038 19:36:57 -- nvmf/common.sh@7 -- # uname -s 00:06:16.038 19:36:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.038 19:36:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.038 19:36:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.038 19:36:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.038 19:36:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.038 19:36:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.038 19:36:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.038 19:36:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.038 19:36:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.038 19:36:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.038 19:36:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:16.038 19:36:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:16.038 19:36:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.038 19:36:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.038 19:36:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.038 19:36:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.038 19:36:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.038 19:36:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.038 19:36:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.038 19:36:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.038 19:36:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.038 19:36:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.038 19:36:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.038 19:36:57 -- paths/export.sh@5 -- # export PATH 00:06:16.038 19:36:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.038 19:36:57 -- nvmf/common.sh@47 -- # : 0 00:06:16.038 19:36:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.038 19:36:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.038 19:36:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.038 19:36:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.038 19:36:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.038 19:36:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.038 19:36:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.038 19:36:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.038 19:36:57 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:16.038 19:36:57 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:16.038 19:36:57 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:16.038 19:36:57 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:16.038 19:36:57 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:16.038 19:36:57 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:16.038 19:36:57 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:16.038 19:36:57 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:16.038 19:36:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:16.038 19:36:57 -- common/autotest_common.sh@10 -- # set +x 00:06:16.038 19:36:57 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:16.038 19:36:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:16.038 19:36:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.038 19:36:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:16.038 19:36:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:16.038 19:36:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:16.038 19:36:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.038 19:36:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:16.038 19:36:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.038 19:36:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:16.038 19:36:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:16.038 19:36:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:16.038 19:36:57 -- 
common/autotest_common.sh@10 -- # set +x 00:06:17.946 19:36:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:17.946 19:36:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:17.946 19:36:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:17.946 19:36:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:17.946 19:36:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:17.946 19:36:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:17.946 19:36:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:17.946 19:36:59 -- nvmf/common.sh@295 -- # net_devs=() 00:06:17.946 19:36:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:17.946 19:36:59 -- nvmf/common.sh@296 -- # e810=() 00:06:17.946 19:36:59 -- nvmf/common.sh@296 -- # local -ga e810 00:06:17.946 19:36:59 -- nvmf/common.sh@297 -- # x722=() 00:06:17.946 19:36:59 -- nvmf/common.sh@297 -- # local -ga x722 00:06:17.946 19:36:59 -- nvmf/common.sh@298 -- # mlx=() 00:06:17.946 19:36:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:17.946 19:36:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.946 19:36:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.946 19:36:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.946 19:36:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.946 19:36:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.946 19:36:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.946 19:36:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.946 19:36:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:17.946 19:36:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.946 19:36:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.946 19:36:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.946 19:36:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:17.946 19:36:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:17.946 19:36:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:17.946 19:36:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:17.946 19:36:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:17.946 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:17.946 19:36:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:17.946 19:36:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:17.946 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:17.946 19:36:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
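# Editor's note: the gather_supported_nvmf_pci_devs walk above boils down to
# matching supported PCI device IDs (0x8086:0x159b is an Intel E810 port
# handled by the ice driver, hence the e810 bucket) and then mapping each PCI
# function to its kernel netdev through sysfs. A rough equivalent, offered as
# a sketch rather than the harness's actual implementation:
for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    echo "Found net devices under $pci: $(ls /sys/bus/pci/devices/$pci/net/)"
done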
00:06:17.946 19:36:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:17.946 19:36:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:17.946 19:36:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.946 19:36:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:17.946 19:36:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.946 19:36:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:17.946 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:17.946 19:36:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.946 19:36:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:17.946 19:36:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.946 19:36:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:17.946 19:36:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.946 19:36:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:17.946 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:17.946 19:36:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.946 19:36:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:17.946 19:36:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:17.946 19:36:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:17.946 19:36:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:17.946 19:36:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:17.946 19:36:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:17.946 19:36:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:17.946 19:36:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:17.946 19:36:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:17.946 19:36:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:17.946 19:36:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:17.946 19:36:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:17.946 19:36:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:17.946 19:36:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:17.946 19:36:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:17.946 19:36:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:17.946 19:36:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:17.946 19:36:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:17.946 19:36:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:17.946 19:36:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:17.946 19:36:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:18.205 19:36:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:18.205 19:36:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:18.205 19:36:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:18.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:18.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:06:18.205 00:06:18.205 --- 10.0.0.2 ping statistics --- 00:06:18.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.205 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:06:18.205 19:36:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:18.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:18.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:06:18.205 00:06:18.205 --- 10.0.0.1 ping statistics --- 00:06:18.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.205 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:06:18.205 19:36:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:18.205 19:36:59 -- nvmf/common.sh@411 -- # return 0 00:06:18.205 19:36:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:18.205 19:36:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:18.205 19:36:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:18.205 19:36:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:18.205 19:36:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:18.205 19:36:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:18.205 19:36:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:18.205 19:36:59 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:18.205 19:36:59 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:18.205 19:36:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:18.205 19:36:59 -- common/autotest_common.sh@10 -- # set +x 00:06:18.205 19:36:59 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:18.205 19:36:59 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:18.205 19:36:59 -- target/nvmf_example.sh@34 -- # nvmfpid=1598287 00:06:18.205 19:36:59 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:18.205 19:36:59 -- target/nvmf_example.sh@36 -- # waitforlisten 1598287 00:06:18.205 19:36:59 -- common/autotest_common.sh@817 -- # '[' -z 1598287 ']' 00:06:18.205 19:36:59 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:18.205 19:36:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.205 19:36:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:18.205 19:36:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
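# Editor's note: the nvmf_tcp_init sequence above sets up the two-endpoint
# topology this job runs on — one E810 port (cvl_0_0) is moved into a private
# network namespace for the target, while its sibling port (cvl_0_1) stays in
# the root namespace as the initiator. Condensed from the trace:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target; 0.186 ms in the run above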
00:06:18.205 19:36:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:18.205 19:36:59 -- common/autotest_common.sh@10 -- # set +x 00:06:18.205 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.141 19:37:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:19.141 19:37:00 -- common/autotest_common.sh@850 -- # return 0 00:06:19.141 19:37:00 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:19.141 19:37:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:19.141 19:37:00 -- common/autotest_common.sh@10 -- # set +x 00:06:19.141 19:37:00 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:19.141 19:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.141 19:37:00 -- common/autotest_common.sh@10 -- # set +x 00:06:19.141 19:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.141 19:37:00 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:19.141 19:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.141 19:37:00 -- common/autotest_common.sh@10 -- # set +x 00:06:19.141 19:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.141 19:37:00 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:19.141 19:37:00 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:19.141 19:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.141 19:37:00 -- common/autotest_common.sh@10 -- # set +x 00:06:19.141 19:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.141 19:37:00 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:19.141 19:37:00 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:19.141 19:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.141 19:37:00 -- common/autotest_common.sh@10 -- # set +x 00:06:19.141 19:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.141 19:37:00 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:19.141 19:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.141 19:37:00 -- common/autotest_common.sh@10 -- # set +x 00:06:19.141 19:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.142 19:37:00 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:19.142 19:37:00 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:19.142 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.353 Initializing NVMe Controllers 00:06:31.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:31.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:31.353 Initialization complete. Launching workers. 
00:06:31.353 ========================================================
00:06:31.353                                             Latency(us)
00:06:31.353 Device Information                                                        :     IOPS      MiB/s    Average        min        max
00:06:31.353 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14833.80      57.94    4315.72     885.69   16430.04
00:06:31.353 ========================================================
00:06:31.353 Total                                                                     : 14833.80      57.94    4315.72     885.69   16430.04
00:06:31.353
00:06:31.353 19:37:10 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:06:31.353 19:37:10 -- target/nvmf_example.sh@66 -- # nvmftestfini
00:06:31.354 19:37:10 -- nvmf/common.sh@477 -- # nvmfcleanup
00:06:31.354 19:37:10 -- nvmf/common.sh@117 -- # sync
00:06:31.354 19:37:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:31.354 19:37:10 -- nvmf/common.sh@120 -- # set +e
00:06:31.354 19:37:10 -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:31.354 19:37:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:31.354 rmmod nvme_tcp
00:06:31.354 rmmod nvme_fabrics
00:06:31.354 rmmod nvme_keyring
00:06:31.354 19:37:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:31.354 19:37:10 -- nvmf/common.sh@124 -- # set -e
00:06:31.354 19:37:10 -- nvmf/common.sh@125 -- # return 0
00:06:31.354 19:37:10 -- nvmf/common.sh@478 -- # '[' -n 1598287 ']'
00:06:31.354 19:37:10 -- nvmf/common.sh@479 -- # killprocess 1598287
00:06:31.354 19:37:10 -- common/autotest_common.sh@936 -- # '[' -z 1598287 ']'
00:06:31.354 19:37:10 -- common/autotest_common.sh@940 -- # kill -0 1598287
00:06:31.354 19:37:10 -- common/autotest_common.sh@941 -- # uname
00:06:31.354 19:37:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:31.354 19:37:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1598287
00:06:31.354 19:37:10 -- common/autotest_common.sh@942 -- # process_name=nvmf
00:06:31.354 19:37:10 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']'
00:06:31.354 19:37:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1598287'
00:06:31.354 killing process with pid 1598287
00:06:31.354 19:37:10 -- common/autotest_common.sh@955 -- # kill 1598287
00:06:31.354 19:37:10 -- common/autotest_common.sh@960 -- # wait 1598287
00:06:31.354 nvmf threads initialize successfully
00:06:31.354 bdev subsystem init successfully
00:06:31.354 created a nvmf target service
00:06:31.354 create targets's poll groups done
00:06:31.354 all subsystems of target started
00:06:31.354 nvmf target is running
00:06:31.354 all subsystems of target stopped
00:06:31.354 destroy targets's poll groups done
00:06:31.354 destroyed the nvmf target service
00:06:31.354 bdev subsystem finish successfully
00:06:31.354 nvmf threads destroy successfully
00:06:31.354 19:37:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:06:31.354 19:37:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:06:31.354 19:37:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:06:31.354 19:37:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:06:31.354 19:37:11 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:06:31.354 19:37:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:31.354 19:37:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:06:31.354 19:37:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:31.925 19:37:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:06:31.925 19:37:13 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:06:31.925 19:37:13 --
common/autotest_common.sh@716 -- # xtrace_disable 00:06:31.925 19:37:13 -- common/autotest_common.sh@10 -- # set +x 00:06:31.925 00:06:31.925 real 0m15.900s 00:06:31.925 user 0m45.041s 00:06:31.925 sys 0m3.278s 00:06:31.925 19:37:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.925 19:37:13 -- common/autotest_common.sh@10 -- # set +x 00:06:31.925 ************************************ 00:06:31.925 END TEST nvmf_example 00:06:31.925 ************************************ 00:06:31.925 19:37:13 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:31.925 19:37:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:31.925 19:37:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.925 19:37:13 -- common/autotest_common.sh@10 -- # set +x 00:06:31.925 ************************************ 00:06:31.925 START TEST nvmf_filesystem 00:06:31.925 ************************************ 00:06:31.925 19:37:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:31.925 * Looking for test storage... 00:06:31.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.925 19:37:13 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:31.925 19:37:13 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:31.925 19:37:13 -- common/autotest_common.sh@34 -- # set -e 00:06:31.925 19:37:13 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:31.925 19:37:13 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:31.925 19:37:13 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:31.925 19:37:13 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:31.925 19:37:13 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:31.925 19:37:13 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:31.925 19:37:13 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:31.925 19:37:13 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:31.925 19:37:13 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:31.925 19:37:13 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:31.925 19:37:13 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:31.925 19:37:13 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:31.925 19:37:13 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:31.925 19:37:13 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:31.925 19:37:13 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:31.925 19:37:13 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:31.925 19:37:13 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:31.925 19:37:13 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:31.925 19:37:13 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:31.925 19:37:13 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:31.925 19:37:13 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:31.925 19:37:13 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:31.925 19:37:13 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:31.925 19:37:13 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:31.925 19:37:13 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:31.925 19:37:13 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:31.925 19:37:13 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:31.925 19:37:13 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:31.925 19:37:13 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:31.925 19:37:13 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:31.925 19:37:13 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:31.925 19:37:13 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:31.925 19:37:13 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:31.925 19:37:13 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:31.925 19:37:13 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:31.925 19:37:13 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:31.925 19:37:13 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:31.925 19:37:13 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:31.925 19:37:13 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:31.925 19:37:13 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:31.925 19:37:13 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:31.925 19:37:13 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:31.925 19:37:13 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:31.925 19:37:13 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:31.925 19:37:13 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:31.925 19:37:13 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:31.925 19:37:13 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:31.925 19:37:13 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:31.925 19:37:13 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:31.925 19:37:13 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:31.925 19:37:13 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:31.925 19:37:13 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:31.925 19:37:13 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:31.925 19:37:13 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:31.925 19:37:13 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:31.925 19:37:13 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:31.925 19:37:13 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:31.925 19:37:13 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:06:31.925 19:37:13 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:06:31.925 19:37:13 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:06:31.925 19:37:13 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:06:31.925 19:37:13 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:06:31.925 19:37:13 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:06:31.925 19:37:13 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:06:31.925 19:37:13 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:06:31.925 19:37:13 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:06:31.925 19:37:13 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:06:31.925 19:37:13 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:06:31.925 19:37:13 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:06:31.925 
19:37:13 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:06:31.925 19:37:13 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:06:31.925 19:37:13 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:06:31.925 19:37:13 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:31.925 19:37:13 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:06:31.925 19:37:13 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:06:31.925 19:37:13 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:06:31.925 19:37:13 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:06:31.925 19:37:13 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:06:31.925 19:37:13 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:06:31.925 19:37:13 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:06:31.925 19:37:13 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:06:31.925 19:37:13 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:06:31.925 19:37:13 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:06:31.925 19:37:13 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:06:31.925 19:37:13 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:31.925 19:37:13 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:06:31.925 19:37:13 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:06:31.925 19:37:13 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:31.925 19:37:13 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:31.925 19:37:13 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:31.925 19:37:13 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:31.925 19:37:13 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:31.925 19:37:13 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:31.925 19:37:13 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:31.925 19:37:13 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:31.925 19:37:13 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:31.925 19:37:13 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:31.925 19:37:13 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:31.925 19:37:13 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:31.925 19:37:13 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:31.925 19:37:13 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:31.925 19:37:13 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:31.925 19:37:13 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:31.925 #define SPDK_CONFIG_H 00:06:31.925 #define SPDK_CONFIG_APPS 1 00:06:31.925 #define SPDK_CONFIG_ARCH native 00:06:31.925 #undef SPDK_CONFIG_ASAN 00:06:31.925 #undef SPDK_CONFIG_AVAHI 00:06:31.925 #undef SPDK_CONFIG_CET 00:06:31.925 #define SPDK_CONFIG_COVERAGE 1 00:06:31.925 #define SPDK_CONFIG_CROSS_PREFIX 00:06:31.925 #undef SPDK_CONFIG_CRYPTO 00:06:31.925 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:31.925 #undef 
SPDK_CONFIG_CUSTOMOCF 00:06:31.925 #undef SPDK_CONFIG_DAOS 00:06:31.925 #define SPDK_CONFIG_DAOS_DIR 00:06:31.925 #define SPDK_CONFIG_DEBUG 1 00:06:31.925 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:31.925 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:31.925 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:31.925 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:31.925 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:31.925 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:31.925 #define SPDK_CONFIG_EXAMPLES 1 00:06:31.925 #undef SPDK_CONFIG_FC 00:06:31.925 #define SPDK_CONFIG_FC_PATH 00:06:31.925 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:31.925 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:31.925 #undef SPDK_CONFIG_FUSE 00:06:31.925 #undef SPDK_CONFIG_FUZZER 00:06:31.925 #define SPDK_CONFIG_FUZZER_LIB 00:06:31.925 #undef SPDK_CONFIG_GOLANG 00:06:31.925 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:31.925 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:31.925 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:31.925 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:31.925 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:31.925 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:31.925 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:31.925 #define SPDK_CONFIG_IDXD 1 00:06:31.925 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:31.925 #undef SPDK_CONFIG_IPSEC_MB 00:06:31.925 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:31.925 #define SPDK_CONFIG_ISAL 1 00:06:31.925 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:31.925 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:31.925 #define SPDK_CONFIG_LIBDIR 00:06:31.925 #undef SPDK_CONFIG_LTO 00:06:31.925 #define SPDK_CONFIG_MAX_LCORES 00:06:31.925 #define SPDK_CONFIG_NVME_CUSE 1 00:06:31.925 #undef SPDK_CONFIG_OCF 00:06:31.925 #define SPDK_CONFIG_OCF_PATH 00:06:31.925 #define SPDK_CONFIG_OPENSSL_PATH 00:06:31.925 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:31.925 #define SPDK_CONFIG_PGO_DIR 00:06:31.925 #undef SPDK_CONFIG_PGO_USE 00:06:31.925 #define SPDK_CONFIG_PREFIX /usr/local 00:06:31.925 #undef SPDK_CONFIG_RAID5F 00:06:31.925 #undef SPDK_CONFIG_RBD 00:06:31.925 #define SPDK_CONFIG_RDMA 1 00:06:31.925 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:31.925 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:31.925 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:31.925 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:31.925 #define SPDK_CONFIG_SHARED 1 00:06:31.925 #undef SPDK_CONFIG_SMA 00:06:31.925 #define SPDK_CONFIG_TESTS 1 00:06:31.925 #undef SPDK_CONFIG_TSAN 00:06:31.925 #define SPDK_CONFIG_UBLK 1 00:06:31.925 #define SPDK_CONFIG_UBSAN 1 00:06:31.925 #undef SPDK_CONFIG_UNIT_TESTS 00:06:31.925 #undef SPDK_CONFIG_URING 00:06:31.925 #define SPDK_CONFIG_URING_PATH 00:06:31.925 #undef SPDK_CONFIG_URING_ZNS 00:06:31.925 #undef SPDK_CONFIG_USDT 00:06:31.925 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:31.925 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:31.925 #define SPDK_CONFIG_VFIO_USER 1 00:06:31.925 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:31.925 #define SPDK_CONFIG_VHOST 1 00:06:31.925 #define SPDK_CONFIG_VIRTIO 1 00:06:31.925 #undef SPDK_CONFIG_VTUNE 00:06:31.925 #define SPDK_CONFIG_VTUNE_DIR 00:06:31.925 #define SPDK_CONFIG_WERROR 1 00:06:31.925 #define SPDK_CONFIG_WPDK_DIR 00:06:31.925 #undef SPDK_CONFIG_XNVME 00:06:31.925 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:31.925 19:37:13 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:31.925 19:37:13 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.925 19:37:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.925 19:37:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.925 19:37:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.925 19:37:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.925 19:37:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.925 19:37:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.925 19:37:13 -- paths/export.sh@5 -- # export PATH 00:06:31.925 19:37:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.925 19:37:13 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:31.925 19:37:13 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:31.925 19:37:13 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:31.925 19:37:13 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:31.925 19:37:13 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:31.925 19:37:13 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:31.925 19:37:13 -- pm/common@67 -- # TEST_TAG=N/A 00:06:31.925 19:37:13 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:31.925 19:37:13 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:31.925 19:37:13 -- pm/common@71 -- # uname -s 00:06:31.925 19:37:13 -- pm/common@71 -- # PM_OS=Linux 00:06:31.925 19:37:13 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:31.925 19:37:13 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:06:31.925 19:37:13 -- pm/common@76 -- # [[ Linux == Linux ]] 00:06:31.925 19:37:13 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:06:31.925 19:37:13 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:06:31.925 19:37:13 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:31.925 19:37:13 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:31.925 19:37:13 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:06:31.925 19:37:13 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:06:31.925 19:37:13 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:31.925 19:37:13 -- common/autotest_common.sh@57 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:31.925 19:37:13 -- common/autotest_common.sh@61 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:31.925 19:37:13 -- common/autotest_common.sh@63 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:31.925 19:37:13 -- common/autotest_common.sh@65 -- # : 1 00:06:31.925 19:37:13 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:31.925 19:37:13 -- common/autotest_common.sh@67 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:31.925 19:37:13 -- common/autotest_common.sh@69 -- # : 00:06:31.925 19:37:13 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:31.925 19:37:13 -- common/autotest_common.sh@71 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:31.925 19:37:13 -- common/autotest_common.sh@73 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:31.925 19:37:13 -- common/autotest_common.sh@75 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:31.925 19:37:13 -- common/autotest_common.sh@77 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:31.925 19:37:13 -- common/autotest_common.sh@79 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:31.925 19:37:13 -- common/autotest_common.sh@81 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:31.925 19:37:13 -- common/autotest_common.sh@83 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:31.925 19:37:13 -- common/autotest_common.sh@85 -- # : 1 00:06:31.925 19:37:13 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:31.925 19:37:13 -- common/autotest_common.sh@87 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:31.925 19:37:13 -- common/autotest_common.sh@89 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:31.925 19:37:13 -- common/autotest_common.sh@91 -- # : 1 
00:06:31.925 19:37:13 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:31.925 19:37:13 -- common/autotest_common.sh@93 -- # : 1 00:06:31.925 19:37:13 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:31.925 19:37:13 -- common/autotest_common.sh@95 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:31.925 19:37:13 -- common/autotest_common.sh@97 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:31.925 19:37:13 -- common/autotest_common.sh@99 -- # : 0 00:06:31.925 19:37:13 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:31.925 19:37:13 -- common/autotest_common.sh@101 -- # : tcp 00:06:31.926 19:37:13 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:31.926 19:37:13 -- common/autotest_common.sh@103 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:31.926 19:37:13 -- common/autotest_common.sh@105 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:31.926 19:37:13 -- common/autotest_common.sh@107 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:31.926 19:37:13 -- common/autotest_common.sh@109 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:31.926 19:37:13 -- common/autotest_common.sh@111 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:31.926 19:37:13 -- common/autotest_common.sh@113 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:31.926 19:37:13 -- common/autotest_common.sh@115 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:31.926 19:37:13 -- common/autotest_common.sh@117 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:31.926 19:37:13 -- common/autotest_common.sh@119 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:31.926 19:37:13 -- common/autotest_common.sh@121 -- # : 1 00:06:31.926 19:37:13 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:31.926 19:37:13 -- common/autotest_common.sh@123 -- # : 00:06:31.926 19:37:13 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:31.926 19:37:13 -- common/autotest_common.sh@125 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:31.926 19:37:13 -- common/autotest_common.sh@127 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:31.926 19:37:13 -- common/autotest_common.sh@129 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:31.926 19:37:13 -- common/autotest_common.sh@131 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:31.926 19:37:13 -- common/autotest_common.sh@133 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:31.926 19:37:13 -- common/autotest_common.sh@135 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:31.926 19:37:13 -- common/autotest_common.sh@137 -- # : 00:06:31.926 19:37:13 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:31.926 19:37:13 -- 
common/autotest_common.sh@139 -- # : true 00:06:31.926 19:37:13 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:31.926 19:37:13 -- common/autotest_common.sh@141 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:31.926 19:37:13 -- common/autotest_common.sh@143 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:31.926 19:37:13 -- common/autotest_common.sh@145 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:31.926 19:37:13 -- common/autotest_common.sh@147 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:31.926 19:37:13 -- common/autotest_common.sh@149 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:31.926 19:37:13 -- common/autotest_common.sh@151 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:31.926 19:37:13 -- common/autotest_common.sh@153 -- # : e810 00:06:31.926 19:37:13 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:31.926 19:37:13 -- common/autotest_common.sh@155 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:31.926 19:37:13 -- common/autotest_common.sh@157 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:31.926 19:37:13 -- common/autotest_common.sh@159 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:31.926 19:37:13 -- common/autotest_common.sh@161 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:31.926 19:37:13 -- common/autotest_common.sh@163 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:31.926 19:37:13 -- common/autotest_common.sh@166 -- # : 00:06:31.926 19:37:13 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:31.926 19:37:13 -- common/autotest_common.sh@168 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:31.926 19:37:13 -- common/autotest_common.sh@170 -- # : 0 00:06:31.926 19:37:13 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:31.926 19:37:13 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:31.926 19:37:13 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:31.926 19:37:13 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:31.926 19:37:13 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:31.926 19:37:13 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:31.926 19:37:13 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:31.926 19:37:13 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:31.926 19:37:13 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:31.926 19:37:13 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:31.926 19:37:13 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:31.926 19:37:13 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:31.926 19:37:13 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:31.926 19:37:13 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:31.926 19:37:13 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:31.926 19:37:13 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:31.926 19:37:13 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:31.926 19:37:13 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:31.926 19:37:13 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:31.926 19:37:13 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:31.926 19:37:13 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:31.926 19:37:13 -- common/autotest_common.sh@199 -- # cat 00:06:31.926 19:37:13 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:06:31.926 19:37:13 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:31.926 19:37:13 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:31.926 19:37:13 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:31.926 19:37:13 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:31.926 19:37:13 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:06:31.926 19:37:13 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:06:31.926 19:37:13 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:31.926 19:37:13 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:31.926 19:37:13 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:31.926 19:37:13 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:31.926 19:37:13 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:31.926 19:37:13 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:31.926 19:37:13 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:31.926 19:37:13 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:31.926 19:37:13 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:31.926 19:37:13 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:31.926 19:37:13 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:31.926 19:37:13 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:31.926 19:37:13 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:06:31.926 19:37:13 -- common/autotest_common.sh@252 -- # export valgrind= 00:06:31.926 19:37:13 -- common/autotest_common.sh@252 -- # valgrind= 00:06:31.926 19:37:13 -- common/autotest_common.sh@258 -- # uname -s 00:06:31.926 19:37:13 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:06:31.926 19:37:13 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:06:31.926 19:37:13 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:06:31.926 19:37:13 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:06:31.926 19:37:13 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:31.926 19:37:13 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:31.926 
19:37:13 -- common/autotest_common.sh@268 -- # MAKE=make 00:06:31.926 19:37:13 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j48 00:06:31.926 19:37:13 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:06:31.926 19:37:13 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:06:31.926 19:37:13 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:06:31.926 19:37:13 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:06:31.926 19:37:13 -- common/autotest_common.sh@289 -- # for i in "$@" 00:06:31.926 19:37:13 -- common/autotest_common.sh@290 -- # case "$i" in 00:06:31.926 19:37:13 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:06:31.926 19:37:13 -- common/autotest_common.sh@307 -- # [[ -z 1600116 ]] 00:06:31.926 19:37:13 -- common/autotest_common.sh@307 -- # kill -0 1600116 00:06:31.926 19:37:13 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:06:31.926 19:37:13 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:06:31.926 19:37:13 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:06:31.926 19:37:13 -- common/autotest_common.sh@320 -- # local mount target_dir 00:06:31.926 19:37:13 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:06:31.926 19:37:13 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:06:31.926 19:37:13 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:06:31.926 19:37:13 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:06:32.187 19:37:13 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.QPbYIX 00:06:32.187 19:37:13 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:32.187 19:37:13 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:06:32.187 19:37:13 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:06:32.187 19:37:13 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.QPbYIX/tests/target /tmp/spdk.QPbYIX 00:06:32.187 19:37:13 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:06:32.187 19:37:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.187 19:37:13 -- common/autotest_common.sh@316 -- # df -T 00:06:32.187 19:37:13 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:06:32.187 19:37:13 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:06:32.187 19:37:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:06:32.187 19:37:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:06:32.187 19:37:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:06:32.187 19:37:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:06:32.187 19:37:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.187 19:37:13 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:06:32.187 19:37:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:06:32.187 19:37:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:06:32.187 19:37:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:06:32.187 19:37:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:06:32.187 19:37:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.187 19:37:13 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:06:32.187 19:37:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:06:32.187 19:37:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=48134467584 00:06:32.187 19:37:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61994708992 00:06:32.187 19:37:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=13860241408 00:06:32.187 19:37:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.187 19:37:13 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:32.187 19:37:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:32.187 19:37:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=30992642048 00:06:32.187 19:37:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997352448 00:06:32.187 19:37:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=4710400 00:06:32.187 19:37:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.187 19:37:13 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:32.187 19:37:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:32.187 19:37:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=12390178816 00:06:32.187 19:37:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12398944256 00:06:32.187 19:37:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=8765440 00:06:32.187 19:37:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.187 19:37:13 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:32.187 19:37:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:32.187 19:37:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=30996582400 00:06:32.187 19:37:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30997356544 00:06:32.187 19:37:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=774144 00:06:32.187 19:37:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.187 19:37:13 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:32.187 19:37:13 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:32.187 19:37:13 -- common/autotest_common.sh@351 -- # avails["$mount"]=6199463936 00:06:32.187 19:37:13 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6199468032 00:06:32.187 19:37:13 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:06:32.187 19:37:13 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.187 19:37:13 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:06:32.187 * Looking for test storage... 
00:06:32.187 19:37:13 -- common/autotest_common.sh@357 -- # local target_space new_size 00:06:32.187 19:37:13 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:06:32.187 19:37:13 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.187 19:37:13 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:32.187 19:37:13 -- common/autotest_common.sh@361 -- # mount=/ 00:06:32.187 19:37:13 -- common/autotest_common.sh@363 -- # target_space=48134467584 00:06:32.187 19:37:13 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:06:32.187 19:37:13 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:06:32.187 19:37:13 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:06:32.187 19:37:13 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:06:32.187 19:37:13 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:06:32.187 19:37:13 -- common/autotest_common.sh@370 -- # new_size=16074833920 00:06:32.187 19:37:13 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:32.188 19:37:13 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.188 19:37:13 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.188 19:37:13 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.188 19:37:13 -- common/autotest_common.sh@378 -- # return 0 00:06:32.188 19:37:13 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:06:32.188 19:37:13 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:06:32.188 19:37:13 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:32.188 19:37:13 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:32.188 19:37:13 -- common/autotest_common.sh@1673 -- # true 00:06:32.188 19:37:13 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:32.188 19:37:13 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:32.188 19:37:13 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:32.188 19:37:13 -- common/autotest_common.sh@27 -- # exec 00:06:32.188 19:37:13 -- common/autotest_common.sh@29 -- # exec 00:06:32.188 19:37:13 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:32.188 19:37:13 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:32.188 19:37:13 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:32.188 19:37:13 -- common/autotest_common.sh@18 -- # set -x 00:06:32.188 19:37:13 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.188 19:37:13 -- nvmf/common.sh@7 -- # uname -s 00:06:32.188 19:37:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.188 19:37:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.188 19:37:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.188 19:37:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.188 19:37:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.188 19:37:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.188 19:37:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.188 19:37:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.188 19:37:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.188 19:37:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.188 19:37:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:32.188 19:37:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:32.188 19:37:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.188 19:37:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.188 19:37:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.188 19:37:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.188 19:37:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.188 19:37:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.188 19:37:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.188 19:37:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.188 19:37:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.188 19:37:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.188 19:37:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.188 19:37:13 -- paths/export.sh@5 -- # export PATH 00:06:32.188 19:37:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.188 19:37:13 -- nvmf/common.sh@47 -- # : 0 00:06:32.188 19:37:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:32.188 19:37:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:32.188 19:37:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.188 19:37:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.188 19:37:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.188 19:37:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:32.188 19:37:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:32.188 19:37:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:32.188 19:37:13 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:32.188 19:37:13 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:32.188 19:37:13 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:32.188 19:37:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:32.188 19:37:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.188 19:37:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:32.188 19:37:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:32.188 19:37:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:32.188 19:37:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.188 19:37:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:32.188 19:37:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.188 19:37:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:32.188 19:37:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:32.188 19:37:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:32.188 19:37:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.160 19:37:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:34.160 19:37:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:34.160 19:37:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:34.160 19:37:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:34.160 19:37:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:34.160 19:37:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:34.160 19:37:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:34.160 19:37:15 -- 
nvmf/common.sh@295 -- # net_devs=() 00:06:34.160 19:37:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:34.160 19:37:15 -- nvmf/common.sh@296 -- # e810=() 00:06:34.160 19:37:15 -- nvmf/common.sh@296 -- # local -ga e810 00:06:34.160 19:37:15 -- nvmf/common.sh@297 -- # x722=() 00:06:34.160 19:37:15 -- nvmf/common.sh@297 -- # local -ga x722 00:06:34.160 19:37:15 -- nvmf/common.sh@298 -- # mlx=() 00:06:34.160 19:37:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:34.160 19:37:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:34.160 19:37:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:34.160 19:37:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:34.160 19:37:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:34.160 19:37:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:34.160 19:37:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:34.160 19:37:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:34.160 19:37:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:34.160 19:37:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:34.160 19:37:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:34.160 19:37:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:34.160 19:37:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:34.160 19:37:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:34.160 19:37:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:34.160 19:37:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:34.160 19:37:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:34.160 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:34.160 19:37:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:34.160 19:37:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:34.160 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:34.160 19:37:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:34.160 19:37:15 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:34.160 19:37:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.160 19:37:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:34.160 19:37:15 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.160 19:37:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:34.160 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:34.160 19:37:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.160 19:37:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:34.160 19:37:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.160 19:37:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:34.160 19:37:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.160 19:37:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:34.160 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:34.160 19:37:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.160 19:37:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:34.160 19:37:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:34.160 19:37:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:34.160 19:37:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:34.160 19:37:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:34.160 19:37:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:34.160 19:37:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:34.160 19:37:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:34.160 19:37:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:34.160 19:37:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:34.160 19:37:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:34.160 19:37:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:34.160 19:37:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:34.160 19:37:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:34.160 19:37:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:34.160 19:37:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:34.160 19:37:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:34.160 19:37:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:34.160 19:37:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:34.160 19:37:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:34.160 19:37:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:34.160 19:37:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:34.160 19:37:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:34.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:34.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:06:34.160 00:06:34.160 --- 10.0.0.2 ping statistics --- 00:06:34.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.160 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:06:34.160 19:37:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:34.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:34.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:06:34.160 00:06:34.160 --- 10.0.0.1 ping statistics --- 00:06:34.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.160 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:06:34.160 19:37:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:34.160 19:37:15 -- nvmf/common.sh@411 -- # return 0 00:06:34.160 19:37:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:34.160 19:37:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:34.160 19:37:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:34.160 19:37:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:34.160 19:37:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:34.160 19:37:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:34.160 19:37:15 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:34.160 19:37:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:34.160 19:37:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.160 19:37:15 -- common/autotest_common.sh@10 -- # set +x 00:06:34.160 ************************************ 00:06:34.160 START TEST nvmf_filesystem_no_in_capsule 00:06:34.160 ************************************ 00:06:34.160 19:37:15 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:06:34.160 19:37:15 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:34.160 19:37:15 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:34.160 19:37:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:34.160 19:37:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:34.160 19:37:15 -- common/autotest_common.sh@10 -- # set +x 00:06:34.160 19:37:15 -- nvmf/common.sh@470 -- # nvmfpid=1601749 00:06:34.160 19:37:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:34.160 19:37:15 -- nvmf/common.sh@471 -- # waitforlisten 1601749 00:06:34.160 19:37:15 -- common/autotest_common.sh@817 -- # '[' -z 1601749 ']' 00:06:34.160 19:37:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.160 19:37:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:34.160 19:37:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.160 19:37:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:34.160 19:37:15 -- common/autotest_common.sh@10 -- # set +x 00:06:34.431 [2024-04-24 19:37:15.679931] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:06:34.431 [2024-04-24 19:37:15.680027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.431 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.431 [2024-04-24 19:37:15.746779] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.431 [2024-04-24 19:37:15.868726] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
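Annotation: the nvmftestinit trace above is the whole data path for these runs. With NET_TYPE=phy and two E810 ports found (cvl_0_0, cvl_0_1), the harness moves one port into a private network namespace and starts the target inside it, so initiator and target talk over a real back-to-back link on the same host. A minimal sketch of that plumbing, using only commands and addresses that appear verbatim in the trace:

  ip netns add cvl_0_0_ns_spdk                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                    # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> root ns
  # every target start below is prefixed with the namespace:
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The sub-millisecond round-trip times in the ping checks are consistent with a direct cable between the two ports.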
00:06:34.431 [2024-04-24 19:37:15.868792] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.431 [2024-04-24 19:37:15.868818] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.431 [2024-04-24 19:37:15.868832] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.431 [2024-04-24 19:37:15.868844] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:34.431 [2024-04-24 19:37:15.868923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.431 [2024-04-24 19:37:15.868975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.431 [2024-04-24 19:37:15.869028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.431 [2024-04-24 19:37:15.869031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.370 19:37:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:35.370 19:37:16 -- common/autotest_common.sh@850 -- # return 0 00:06:35.370 19:37:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:35.370 19:37:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:35.370 19:37:16 -- common/autotest_common.sh@10 -- # set +x 00:06:35.370 19:37:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.370 19:37:16 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:35.370 19:37:16 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:35.370 19:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:35.370 19:37:16 -- common/autotest_common.sh@10 -- # set +x 00:06:35.370 [2024-04-24 19:37:16.693866] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.370 19:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:35.370 19:37:16 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:35.370 19:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:35.370 19:37:16 -- common/autotest_common.sh@10 -- # set +x 00:06:35.370 Malloc1 00:06:35.370 19:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:35.370 19:37:16 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:35.370 19:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:35.370 19:37:16 -- common/autotest_common.sh@10 -- # set +x 00:06:35.370 19:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:35.370 19:37:16 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:35.370 19:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:35.370 19:37:16 -- common/autotest_common.sh@10 -- # set +x 00:06:35.370 19:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:35.370 19:37:16 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:35.370 19:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:35.370 19:37:16 -- common/autotest_common.sh@10 -- # set +x 00:06:35.370 [2024-04-24 19:37:16.882147] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.630 19:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:35.630 19:37:16 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:06:35.630 19:37:16 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:35.630 19:37:16 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:35.630 19:37:16 -- common/autotest_common.sh@1366 -- # local bs 00:06:35.630 19:37:16 -- common/autotest_common.sh@1367 -- # local nb 00:06:35.630 19:37:16 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:35.630 19:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:35.630 19:37:16 -- common/autotest_common.sh@10 -- # set +x 00:06:35.630 19:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:35.630 19:37:16 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:35.630 { 00:06:35.630 "name": "Malloc1", 00:06:35.630 "aliases": [ 00:06:35.630 "20b9412d-d44c-48fa-a46a-ff121a06ddf3" 00:06:35.630 ], 00:06:35.630 "product_name": "Malloc disk", 00:06:35.630 "block_size": 512, 00:06:35.630 "num_blocks": 1048576, 00:06:35.630 "uuid": "20b9412d-d44c-48fa-a46a-ff121a06ddf3", 00:06:35.630 "assigned_rate_limits": { 00:06:35.630 "rw_ios_per_sec": 0, 00:06:35.630 "rw_mbytes_per_sec": 0, 00:06:35.630 "r_mbytes_per_sec": 0, 00:06:35.630 "w_mbytes_per_sec": 0 00:06:35.630 }, 00:06:35.630 "claimed": true, 00:06:35.630 "claim_type": "exclusive_write", 00:06:35.630 "zoned": false, 00:06:35.630 "supported_io_types": { 00:06:35.630 "read": true, 00:06:35.630 "write": true, 00:06:35.630 "unmap": true, 00:06:35.630 "write_zeroes": true, 00:06:35.630 "flush": true, 00:06:35.630 "reset": true, 00:06:35.630 "compare": false, 00:06:35.630 "compare_and_write": false, 00:06:35.630 "abort": true, 00:06:35.630 "nvme_admin": false, 00:06:35.630 "nvme_io": false 00:06:35.630 }, 00:06:35.630 "memory_domains": [ 00:06:35.630 { 00:06:35.630 "dma_device_id": "system", 00:06:35.630 "dma_device_type": 1 00:06:35.630 }, 00:06:35.630 { 00:06:35.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.630 "dma_device_type": 2 00:06:35.630 } 00:06:35.630 ], 00:06:35.630 "driver_specific": {} 00:06:35.630 } 00:06:35.630 ]' 00:06:35.630 19:37:16 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:35.630 19:37:16 -- common/autotest_common.sh@1369 -- # bs=512 00:06:35.630 19:37:16 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:35.630 19:37:16 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:35.630 19:37:16 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:35.630 19:37:16 -- common/autotest_common.sh@1374 -- # echo 512 00:06:35.630 19:37:16 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:35.630 19:37:16 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:36.199 19:37:17 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:36.199 19:37:17 -- common/autotest_common.sh@1184 -- # local i=0 00:06:36.199 19:37:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:36.199 19:37:17 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:36.199 19:37:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:38.109 19:37:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:38.109 19:37:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:38.109 19:37:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:38.109 19:37:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
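Annotation: the rpc_cmd calls traced above are the complete provisioning sequence for the test subsystem. In this harness rpc_cmd is a thin wrapper over SPDK's JSON-RPC client, so the same sequence can be written against scripts/rpc.py directly; a sketch, assuming the default /var/tmp/spdk.sock socket (the in_capsule pass later in this log differs only in -c 4096):

  # target side (inside the namespace): transport, bdev, subsystem, listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1       # 512 MiB, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  # initiator side: attach, then poll lsblk until the serial shows up
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME     # waitforserial loop

The 512 MiB malloc bdev is what get_bdev_size reads back from the JSON dump above as num_blocks 1048576 x block_size 512 = 536870912 bytes, which must equal the size nvme reports for nvme0n1 before the test proceeds.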
00:06:38.109 19:37:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:38.109 19:37:19 -- common/autotest_common.sh@1194 -- # return 0 00:06:38.109 19:37:19 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:38.109 19:37:19 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:38.109 19:37:19 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:38.109 19:37:19 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:38.109 19:37:19 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:38.109 19:37:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:38.109 19:37:19 -- setup/common.sh@80 -- # echo 536870912 00:06:38.109 19:37:19 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:38.109 19:37:19 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:38.109 19:37:19 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:38.109 19:37:19 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:38.679 19:37:19 -- target/filesystem.sh@69 -- # partprobe 00:06:39.619 19:37:20 -- target/filesystem.sh@70 -- # sleep 1 00:06:40.556 19:37:21 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:40.556 19:37:21 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:40.556 19:37:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:40.556 19:37:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.556 19:37:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.556 ************************************ 00:06:40.556 START TEST filesystem_ext4 00:06:40.556 ************************************ 00:06:40.556 19:37:21 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:40.556 19:37:21 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:40.556 19:37:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:40.556 19:37:21 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:40.556 19:37:21 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:40.556 19:37:21 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:40.556 19:37:21 -- common/autotest_common.sh@914 -- # local i=0 00:06:40.556 19:37:21 -- common/autotest_common.sh@915 -- # local force 00:06:40.556 19:37:21 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:40.556 19:37:21 -- common/autotest_common.sh@918 -- # force=-F 00:06:40.556 19:37:21 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:40.556 mke2fs 1.46.5 (30-Dec-2021) 00:06:40.556 Discarding device blocks: 0/522240 done 00:06:40.816 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:40.816 Filesystem UUID: 63c6cca0-53fc-4a87-a8ea-fd872f2a707a 00:06:40.816 Superblock backups stored on blocks: 00:06:40.816 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:40.816 00:06:40.816 Allocating group tables: 0/64 done 00:06:40.816 Writing inode tables: 0/64 done 00:06:42.195 Creating journal (8192 blocks): done 00:06:42.195 Writing superblocks and filesystem accounting information: 0/64 done 00:06:42.195 00:06:42.195 19:37:23 -- common/autotest_common.sh@931 -- # return 0 00:06:42.195 19:37:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:43.137 19:37:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:43.137 19:37:24 -- target/filesystem.sh@25 -- # sync 00:06:43.137 19:37:24 -- target/filesystem.sh@26 -- # rm 
/mnt/device/aaa 00:06:43.137 19:37:24 -- target/filesystem.sh@27 -- # sync 00:06:43.137 19:37:24 -- target/filesystem.sh@29 -- # i=0 00:06:43.137 19:37:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:43.137 19:37:24 -- target/filesystem.sh@37 -- # kill -0 1601749 00:06:43.137 19:37:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:43.137 19:37:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:43.137 19:37:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:43.137 19:37:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:43.137 00:06:43.137 real 0m2.528s 00:06:43.137 user 0m0.019s 00:06:43.137 sys 0m0.056s 00:06:43.137 19:37:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:43.137 19:37:24 -- common/autotest_common.sh@10 -- # set +x 00:06:43.137 ************************************ 00:06:43.137 END TEST filesystem_ext4 00:06:43.137 ************************************ 00:06:43.137 19:37:24 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:43.137 19:37:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:43.137 19:37:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.137 19:37:24 -- common/autotest_common.sh@10 -- # set +x 00:06:43.137 ************************************ 00:06:43.137 START TEST filesystem_btrfs 00:06:43.137 ************************************ 00:06:43.137 19:37:24 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:43.137 19:37:24 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:43.137 19:37:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:43.137 19:37:24 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:43.137 19:37:24 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:43.137 19:37:24 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:43.137 19:37:24 -- common/autotest_common.sh@914 -- # local i=0 00:06:43.137 19:37:24 -- common/autotest_common.sh@915 -- # local force 00:06:43.137 19:37:24 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:43.137 19:37:24 -- common/autotest_common.sh@920 -- # force=-f 00:06:43.137 19:37:24 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:43.706 btrfs-progs v6.6.2 00:06:43.706 See https://btrfs.readthedocs.io for more information. 00:06:43.706 00:06:43.706 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:43.706 NOTE: several default settings have changed in version 5.15, please make sure 00:06:43.706 this does not affect your deployments: 00:06:43.706 - DUP for metadata (-m dup) 00:06:43.706 - enabled no-holes (-O no-holes) 00:06:43.706 - enabled free-space-tree (-R free-space-tree) 00:06:43.706 00:06:43.706 Label: (null) 00:06:43.706 UUID: 0cc61eca-cb0b-4ffa-b26a-44ece21db2ec 00:06:43.706 Node size: 16384 00:06:43.706 Sector size: 4096 00:06:43.706 Filesystem size: 510.00MiB 00:06:43.706 Block group profiles: 00:06:43.706 Data: single 8.00MiB 00:06:43.706 Metadata: DUP 32.00MiB 00:06:43.706 System: DUP 8.00MiB 00:06:43.706 SSD detected: yes 00:06:43.706 Zoned device: no 00:06:43.706 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:43.706 Runtime features: free-space-tree 00:06:43.706 Checksum: crc32c 00:06:43.706 Number of devices: 1 00:06:43.706 Devices: 00:06:43.706 ID SIZE PATH 00:06:43.706 1 510.00MiB /dev/nvme0n1p1 00:06:43.706 00:06:43.706 19:37:25 -- common/autotest_common.sh@931 -- # return 0 00:06:43.706 19:37:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:43.965 19:37:25 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:43.965 19:37:25 -- target/filesystem.sh@25 -- # sync 00:06:43.965 19:37:25 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:43.965 19:37:25 -- target/filesystem.sh@27 -- # sync 00:06:43.965 19:37:25 -- target/filesystem.sh@29 -- # i=0 00:06:43.965 19:37:25 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:43.965 19:37:25 -- target/filesystem.sh@37 -- # kill -0 1601749 00:06:43.965 19:37:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:43.965 19:37:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:43.965 19:37:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:43.965 19:37:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:43.965 00:06:43.965 real 0m0.840s 00:06:43.965 user 0m0.036s 00:06:43.965 sys 0m0.102s 00:06:43.965 19:37:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:43.965 19:37:25 -- common/autotest_common.sh@10 -- # set +x 00:06:43.965 ************************************ 00:06:43.965 END TEST filesystem_btrfs 00:06:43.965 ************************************ 00:06:43.965 19:37:25 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:43.965 19:37:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:43.965 19:37:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.965 19:37:25 -- common/autotest_common.sh@10 -- # set +x 00:06:44.225 ************************************ 00:06:44.225 START TEST filesystem_xfs 00:06:44.225 ************************************ 00:06:44.225 19:37:25 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:44.225 19:37:25 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:44.225 19:37:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:44.225 19:37:25 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:44.225 19:37:25 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:44.225 19:37:25 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:44.225 19:37:25 -- common/autotest_common.sh@914 -- # local i=0 00:06:44.225 19:37:25 -- common/autotest_common.sh@915 -- # local force 00:06:44.225 19:37:25 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:44.225 19:37:25 -- common/autotest_common.sh@920 -- # force=-f 00:06:44.225 19:37:25 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:44.225 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:44.225 = sectsz=512 attr=2, projid32bit=1 00:06:44.225 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:44.225 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:44.225 data = bsize=4096 blocks=130560, imaxpct=25 00:06:44.225 = sunit=0 swidth=0 blks 00:06:44.225 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:44.225 log =internal log bsize=4096 blocks=16384, version=2 00:06:44.225 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:44.225 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:45.164 Discarding blocks...Done. 00:06:45.164 19:37:26 -- common/autotest_common.sh@931 -- # return 0 00:06:45.164 19:37:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:47.704 19:37:29 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:47.964 19:37:29 -- target/filesystem.sh@25 -- # sync 00:06:47.964 19:37:29 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:47.964 19:37:29 -- target/filesystem.sh@27 -- # sync 00:06:47.964 19:37:29 -- target/filesystem.sh@29 -- # i=0 00:06:47.964 19:37:29 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:47.964 19:37:29 -- target/filesystem.sh@37 -- # kill -0 1601749 00:06:47.964 19:37:29 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:47.964 19:37:29 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:47.964 19:37:29 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:47.964 19:37:29 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:47.964 00:06:47.964 real 0m3.711s 00:06:47.964 user 0m0.010s 00:06:47.964 sys 0m0.071s 00:06:47.964 19:37:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:47.964 19:37:29 -- common/autotest_common.sh@10 -- # set +x 00:06:47.964 ************************************ 00:06:47.964 END TEST filesystem_xfs 00:06:47.964 ************************************ 00:06:47.964 19:37:29 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:48.222 19:37:29 -- target/filesystem.sh@93 -- # sync 00:06:48.222 19:37:29 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:48.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:48.222 19:37:29 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:48.222 19:37:29 -- common/autotest_common.sh@1205 -- # local i=0 00:06:48.222 19:37:29 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:48.222 19:37:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:48.222 19:37:29 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:48.222 19:37:29 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:48.222 19:37:29 -- common/autotest_common.sh@1217 -- # return 0 00:06:48.222 19:37:29 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:48.222 19:37:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:48.222 19:37:29 -- common/autotest_common.sh@10 -- # set +x 00:06:48.222 19:37:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:48.222 19:37:29 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:48.223 19:37:29 -- target/filesystem.sh@101 -- # killprocess 1601749 00:06:48.223 19:37:29 -- common/autotest_common.sh@936 -- # '[' -z 1601749 ']' 00:06:48.223 19:37:29 -- common/autotest_common.sh@940 -- # kill -0 1601749 00:06:48.223 19:37:29 -- 
common/autotest_common.sh@941 -- # uname 00:06:48.223 19:37:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:48.223 19:37:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1601749 00:06:48.480 19:37:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:48.480 19:37:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:48.480 19:37:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1601749' 00:06:48.480 killing process with pid 1601749 00:06:48.480 19:37:29 -- common/autotest_common.sh@955 -- # kill 1601749 00:06:48.480 19:37:29 -- common/autotest_common.sh@960 -- # wait 1601749 00:06:48.738 19:37:30 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:48.738 00:06:48.738 real 0m14.598s 00:06:48.738 user 0m56.325s 00:06:48.738 sys 0m2.166s 00:06:48.738 19:37:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:48.738 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:06:48.738 ************************************ 00:06:48.738 END TEST nvmf_filesystem_no_in_capsule 00:06:48.738 ************************************ 00:06:48.996 19:37:30 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:48.996 19:37:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:48.996 19:37:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.996 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:06:48.996 ************************************ 00:06:48.996 START TEST nvmf_filesystem_in_capsule 00:06:48.996 ************************************ 00:06:48.996 19:37:30 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:06:48.996 19:37:30 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:48.996 19:37:30 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:48.996 19:37:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:48.996 19:37:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:48.996 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:06:48.996 19:37:30 -- nvmf/common.sh@470 -- # nvmfpid=1603621 00:06:48.996 19:37:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:48.996 19:37:30 -- nvmf/common.sh@471 -- # waitforlisten 1603621 00:06:48.996 19:37:30 -- common/autotest_common.sh@817 -- # '[' -z 1603621 ']' 00:06:48.996 19:37:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.996 19:37:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:48.996 19:37:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.996 19:37:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:48.996 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:06:48.996 [2024-04-24 19:37:30.420824] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:06:48.996 [2024-04-24 19:37:30.420922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.996 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.996 [2024-04-24 19:37:30.499937] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.254 [2024-04-24 19:37:30.625330] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:49.254 [2024-04-24 19:37:30.625396] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:49.254 [2024-04-24 19:37:30.625413] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:49.254 [2024-04-24 19:37:30.625427] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:49.254 [2024-04-24 19:37:30.625440] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:49.254 [2024-04-24 19:37:30.625502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.254 [2024-04-24 19:37:30.625559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.254 [2024-04-24 19:37:30.625584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.254 [2024-04-24 19:37:30.625587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.254 19:37:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:49.254 19:37:30 -- common/autotest_common.sh@850 -- # return 0 00:06:49.254 19:37:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:49.254 19:37:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:49.254 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:06:49.512 19:37:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:49.512 19:37:30 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:49.512 19:37:30 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:49.512 19:37:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:49.512 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:06:49.512 [2024-04-24 19:37:30.783592] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.512 19:37:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:49.512 19:37:30 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:49.512 19:37:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:49.512 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:06:49.512 Malloc1 00:06:49.512 19:37:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:49.512 19:37:30 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:49.512 19:37:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:49.512 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:06:49.512 19:37:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:49.512 19:37:30 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:49.512 19:37:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:49.512 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:06:49.512 19:37:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:49.512 19:37:30 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:49.512 19:37:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:49.512 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:06:49.512 [2024-04-24 19:37:30.972124] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:49.512 19:37:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:49.512 19:37:30 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:49.512 19:37:30 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:49.512 19:37:30 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:49.512 19:37:30 -- common/autotest_common.sh@1366 -- # local bs 00:06:49.513 19:37:30 -- common/autotest_common.sh@1367 -- # local nb 00:06:49.513 19:37:30 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:49.513 19:37:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:49.513 19:37:30 -- common/autotest_common.sh@10 -- # set +x 00:06:49.513 19:37:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:49.513 19:37:30 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:49.513 { 00:06:49.513 "name": "Malloc1", 00:06:49.513 "aliases": [ 00:06:49.513 "97b304aa-26a9-45ee-b45b-135e155bb559" 00:06:49.513 ], 00:06:49.513 "product_name": "Malloc disk", 00:06:49.513 "block_size": 512, 00:06:49.513 "num_blocks": 1048576, 00:06:49.513 "uuid": "97b304aa-26a9-45ee-b45b-135e155bb559", 00:06:49.513 "assigned_rate_limits": { 00:06:49.513 "rw_ios_per_sec": 0, 00:06:49.513 "rw_mbytes_per_sec": 0, 00:06:49.513 "r_mbytes_per_sec": 0, 00:06:49.513 "w_mbytes_per_sec": 0 00:06:49.513 }, 00:06:49.513 "claimed": true, 00:06:49.513 "claim_type": "exclusive_write", 00:06:49.513 "zoned": false, 00:06:49.513 "supported_io_types": { 00:06:49.513 "read": true, 00:06:49.513 "write": true, 00:06:49.513 "unmap": true, 00:06:49.513 "write_zeroes": true, 00:06:49.513 "flush": true, 00:06:49.513 "reset": true, 00:06:49.513 "compare": false, 00:06:49.513 "compare_and_write": false, 00:06:49.513 "abort": true, 00:06:49.513 "nvme_admin": false, 00:06:49.513 "nvme_io": false 00:06:49.513 }, 00:06:49.513 "memory_domains": [ 00:06:49.513 { 00:06:49.513 "dma_device_id": "system", 00:06:49.513 "dma_device_type": 1 00:06:49.513 }, 00:06:49.513 { 00:06:49.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:49.513 "dma_device_type": 2 00:06:49.513 } 00:06:49.513 ], 00:06:49.513 "driver_specific": {} 00:06:49.513 } 00:06:49.513 ]' 00:06:49.513 19:37:30 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:49.770 19:37:31 -- common/autotest_common.sh@1369 -- # bs=512 00:06:49.770 19:37:31 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:49.770 19:37:31 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:49.770 19:37:31 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:49.770 19:37:31 -- common/autotest_common.sh@1374 -- # echo 512 00:06:49.770 19:37:31 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:49.770 19:37:31 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:50.339 19:37:31 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:50.339 19:37:31 -- common/autotest_common.sh@1184 -- # local i=0 00:06:50.339 19:37:31 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:50.339 19:37:31 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:50.339 19:37:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:52.876 19:37:33 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:52.876 19:37:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:52.876 19:37:33 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:52.876 19:37:33 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:52.876 19:37:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:52.876 19:37:33 -- common/autotest_common.sh@1194 -- # return 0 00:06:52.876 19:37:33 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:52.876 19:37:33 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:52.876 19:37:33 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:52.876 19:37:33 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:52.876 19:37:33 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:52.876 19:37:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:52.876 19:37:33 -- setup/common.sh@80 -- # echo 536870912 00:06:52.876 19:37:33 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:52.876 19:37:33 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:52.876 19:37:33 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:52.876 19:37:33 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:52.876 19:37:33 -- target/filesystem.sh@69 -- # partprobe 00:06:53.134 19:37:34 -- target/filesystem.sh@70 -- # sleep 1 00:06:54.536 19:37:35 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:54.536 19:37:35 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:54.536 19:37:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:54.536 19:37:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.536 19:37:35 -- common/autotest_common.sh@10 -- # set +x 00:06:54.536 ************************************ 00:06:54.536 START TEST filesystem_in_capsule_ext4 00:06:54.536 ************************************ 00:06:54.536 19:37:35 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:54.536 19:37:35 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:54.536 19:37:35 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:54.536 19:37:35 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:54.536 19:37:35 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:54.536 19:37:35 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:54.536 19:37:35 -- common/autotest_common.sh@914 -- # local i=0 00:06:54.536 19:37:35 -- common/autotest_common.sh@915 -- # local force 00:06:54.536 19:37:35 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:54.536 19:37:35 -- common/autotest_common.sh@918 -- # force=-F 00:06:54.536 19:37:35 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:54.536 mke2fs 1.46.5 (30-Dec-2021) 00:06:54.537 Discarding device blocks: 0/522240 done 00:06:54.537 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:54.537 Filesystem UUID: af650435-ddf5-42be-877b-234988ae8f83 00:06:54.537 Superblock backups stored on blocks: 00:06:54.537 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:54.537 00:06:54.537 
Allocating group tables: 0/64 done 00:06:54.537 Writing inode tables: 0/64 done 00:06:55.103 Creating journal (8192 blocks): done 00:06:55.362 Writing superblocks and filesystem accounting information: 0/64 done 00:06:55.362 00:06:55.362 19:37:36 -- common/autotest_common.sh@931 -- # return 0 00:06:55.362 19:37:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:55.362 19:37:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:55.362 19:37:36 -- target/filesystem.sh@25 -- # sync 00:06:55.362 19:37:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:55.362 19:37:36 -- target/filesystem.sh@27 -- # sync 00:06:55.362 19:37:36 -- target/filesystem.sh@29 -- # i=0 00:06:55.362 19:37:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:55.362 19:37:36 -- target/filesystem.sh@37 -- # kill -0 1603621 00:06:55.362 19:37:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:55.362 19:37:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:55.362 19:37:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:55.362 19:37:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:55.362 00:06:55.362 real 0m1.098s 00:06:55.362 user 0m0.016s 00:06:55.362 sys 0m0.065s 00:06:55.362 19:37:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:55.362 19:37:36 -- common/autotest_common.sh@10 -- # set +x 00:06:55.362 ************************************ 00:06:55.362 END TEST filesystem_in_capsule_ext4 00:06:55.362 ************************************ 00:06:55.362 19:37:36 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:55.362 19:37:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:55.362 19:37:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.362 19:37:36 -- common/autotest_common.sh@10 -- # set +x 00:06:55.621 ************************************ 00:06:55.621 START TEST filesystem_in_capsule_btrfs 00:06:55.621 ************************************ 00:06:55.621 19:37:36 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:55.621 19:37:36 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:55.621 19:37:36 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:55.621 19:37:36 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:55.621 19:37:36 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:55.621 19:37:36 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:55.621 19:37:36 -- common/autotest_common.sh@914 -- # local i=0 00:06:55.621 19:37:36 -- common/autotest_common.sh@915 -- # local force 00:06:55.621 19:37:36 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:55.621 19:37:36 -- common/autotest_common.sh@920 -- # force=-f 00:06:55.621 19:37:36 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:55.879 btrfs-progs v6.6.2 00:06:55.879 See https://btrfs.readthedocs.io for more information. 00:06:55.879 00:06:55.879 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:55.879 NOTE: several default settings have changed in version 5.15, please make sure 00:06:55.879 this does not affect your deployments: 00:06:55.879 - DUP for metadata (-m dup) 00:06:55.879 - enabled no-holes (-O no-holes) 00:06:55.879 - enabled free-space-tree (-R free-space-tree) 00:06:55.879 00:06:55.879 Label: (null) 00:06:55.879 UUID: de074691-34b0-4a6f-9042-c8bcbb0b8898 00:06:55.879 Node size: 16384 00:06:55.879 Sector size: 4096 00:06:55.879 Filesystem size: 510.00MiB 00:06:55.879 Block group profiles: 00:06:55.879 Data: single 8.00MiB 00:06:55.879 Metadata: DUP 32.00MiB 00:06:55.879 System: DUP 8.00MiB 00:06:55.879 SSD detected: yes 00:06:55.879 Zoned device: no 00:06:55.879 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:55.879 Runtime features: free-space-tree 00:06:55.879 Checksum: crc32c 00:06:55.879 Number of devices: 1 00:06:55.879 Devices: 00:06:55.879 ID SIZE PATH 00:06:55.879 1 510.00MiB /dev/nvme0n1p1 00:06:55.879 00:06:55.879 19:37:37 -- common/autotest_common.sh@931 -- # return 0 00:06:55.879 19:37:37 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:56.446 19:37:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:56.446 19:37:37 -- target/filesystem.sh@25 -- # sync 00:06:56.446 19:37:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:56.446 19:37:37 -- target/filesystem.sh@27 -- # sync 00:06:56.446 19:37:37 -- target/filesystem.sh@29 -- # i=0 00:06:56.446 19:37:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:56.446 19:37:37 -- target/filesystem.sh@37 -- # kill -0 1603621 00:06:56.446 19:37:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:56.446 19:37:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:56.446 19:37:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:56.446 19:37:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:56.446 00:06:56.446 real 0m0.981s 00:06:56.446 user 0m0.026s 00:06:56.446 sys 0m0.117s 00:06:56.446 19:37:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:56.446 19:37:37 -- common/autotest_common.sh@10 -- # set +x 00:06:56.446 ************************************ 00:06:56.446 END TEST filesystem_in_capsule_btrfs 00:06:56.446 ************************************ 00:06:56.706 19:37:37 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:56.706 19:37:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:56.706 19:37:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.706 19:37:37 -- common/autotest_common.sh@10 -- # set +x 00:06:56.706 ************************************ 00:06:56.706 START TEST filesystem_in_capsule_xfs 00:06:56.706 ************************************ 00:06:56.706 19:37:38 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:56.706 19:37:38 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:56.706 19:37:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:56.706 19:37:38 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:56.706 19:37:38 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:56.706 19:37:38 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:56.706 19:37:38 -- common/autotest_common.sh@914 -- # local i=0 00:06:56.706 19:37:38 -- common/autotest_common.sh@915 -- # local force 00:06:56.706 19:37:38 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:56.706 19:37:38 -- common/autotest_common.sh@920 -- # force=-f 
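Annotation: all six filesystem subtests (ext4/btrfs/xfs, with and without in-capsule data) funnel through the make_filesystem helper whose branches are visible in the traced line numbers autotest_common.sh@912-931. An approximate reconstruction, hedged because only the executed branches appear in this trace (the real helper also retries on failure, which no subtest here needed):

  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local i=0          # retry counter in the real helper
      local force
      if [ "$fstype" = ext4 ]; then
          force=-F       # mkfs.ext4 spells "force" as -F
      else
          force=-f       # mkfs.btrfs and mkfs.xfs use -f
      fi
      mkfs."$fstype" $force "$dev_name" && return 0
      return 1           # stand-in for the untraced retry path
  }

Each subtest then mounts the result, does touch/sync/rm/umount, and uses kill -0 to check that the target pid survived the I/O before grepping lsblk for the expected device names.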
00:06:56.706 19:37:38 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:56.706 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:56.706 = sectsz=512 attr=2, projid32bit=1 00:06:56.706 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:56.706 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:56.706 data = bsize=4096 blocks=130560, imaxpct=25 00:06:56.706 = sunit=0 swidth=0 blks 00:06:56.706 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:56.706 log =internal log bsize=4096 blocks=16384, version=2 00:06:56.706 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:56.706 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:58.086 Discarding blocks...Done. 00:06:58.086 19:37:39 -- common/autotest_common.sh@931 -- # return 0 00:06:58.086 19:37:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:00.001 19:37:41 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:00.001 19:37:41 -- target/filesystem.sh@25 -- # sync 00:07:00.001 19:37:41 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:00.001 19:37:41 -- target/filesystem.sh@27 -- # sync 00:07:00.001 19:37:41 -- target/filesystem.sh@29 -- # i=0 00:07:00.001 19:37:41 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:00.001 19:37:41 -- target/filesystem.sh@37 -- # kill -0 1603621 00:07:00.001 19:37:41 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:00.001 19:37:41 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:00.001 19:37:41 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:00.001 19:37:41 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:00.001 00:07:00.001 real 0m3.101s 00:07:00.001 user 0m0.012s 00:07:00.001 sys 0m0.065s 00:07:00.001 19:37:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:00.001 19:37:41 -- common/autotest_common.sh@10 -- # set +x 00:07:00.001 ************************************ 00:07:00.001 END TEST filesystem_in_capsule_xfs 00:07:00.001 ************************************ 00:07:00.001 19:37:41 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:00.001 19:37:41 -- target/filesystem.sh@93 -- # sync 00:07:00.001 19:37:41 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:00.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:00.261 19:37:41 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:00.261 19:37:41 -- common/autotest_common.sh@1205 -- # local i=0 00:07:00.261 19:37:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:00.261 19:37:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:00.261 19:37:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:00.261 19:37:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:00.261 19:37:41 -- common/autotest_common.sh@1217 -- # return 0 00:07:00.261 19:37:41 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:00.261 19:37:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:00.261 19:37:41 -- common/autotest_common.sh@10 -- # set +x 00:07:00.261 19:37:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:00.261 19:37:41 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:00.261 19:37:41 -- target/filesystem.sh@101 -- # killprocess 1603621 00:07:00.261 19:37:41 -- common/autotest_common.sh@936 -- # '[' -z 1603621 ']' 00:07:00.261 19:37:41 -- common/autotest_common.sh@940 -- # kill -0 1603621 
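Annotation: the teardown traced here is identical for both passes and is worth reading as one unit, since it is split across several wrapped lines: remove the test partition, disconnect the initiator, delete the subsystem over RPC, then killprocess stops the target. A sketch using only commands from the trace, with the pid parameterized:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop the partition under a device lock
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill -0 "$nvmfpid"                                 # killprocess: pid still alive?
  ps --no-headers -o comm= "$nvmfpid"                # check the process name (reactor_0 here)
  kill "$nvmfpid"                                    # default SIGTERM
  wait "$nvmfpid"                                    # reap the target process
  nvmfpid=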
00:07:00.261 19:37:41 -- common/autotest_common.sh@941 -- # uname 00:07:00.261 19:37:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:00.261 19:37:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1603621 00:07:00.261 19:37:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:00.261 19:37:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:00.261 19:37:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1603621' 00:07:00.261 killing process with pid 1603621 00:07:00.261 19:37:41 -- common/autotest_common.sh@955 -- # kill 1603621 00:07:00.261 19:37:41 -- common/autotest_common.sh@960 -- # wait 1603621 00:07:00.830 19:37:42 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:00.830 00:07:00.830 real 0m11.758s 00:07:00.830 user 0m44.976s 00:07:00.830 sys 0m1.921s 00:07:00.830 19:37:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:00.830 19:37:42 -- common/autotest_common.sh@10 -- # set +x 00:07:00.830 ************************************ 00:07:00.830 END TEST nvmf_filesystem_in_capsule 00:07:00.830 ************************************ 00:07:00.830 19:37:42 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:00.830 19:37:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:00.830 19:37:42 -- nvmf/common.sh@117 -- # sync 00:07:00.830 19:37:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:00.830 19:37:42 -- nvmf/common.sh@120 -- # set +e 00:07:00.830 19:37:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:00.830 19:37:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:00.830 rmmod nvme_tcp 00:07:00.830 rmmod nvme_fabrics 00:07:00.830 rmmod nvme_keyring 00:07:00.830 19:37:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:00.830 19:37:42 -- nvmf/common.sh@124 -- # set -e 00:07:00.830 19:37:42 -- nvmf/common.sh@125 -- # return 0 00:07:00.831 19:37:42 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:00.831 19:37:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:00.831 19:37:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:00.831 19:37:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:00.831 19:37:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:00.831 19:37:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:00.831 19:37:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.831 19:37:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.831 19:37:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.738 19:37:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:02.738 00:07:02.738 real 0m30.920s 00:07:02.738 user 1m42.159s 00:07:02.738 sys 0m5.780s 00:07:02.738 19:37:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:02.738 19:37:44 -- common/autotest_common.sh@10 -- # set +x 00:07:02.738 ************************************ 00:07:02.738 END TEST nvmf_filesystem 00:07:02.738 ************************************ 00:07:02.998 19:37:44 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:02.998 19:37:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:02.998 19:37:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.998 19:37:44 -- common/autotest_common.sh@10 -- # set +x 00:07:02.998 ************************************ 00:07:02.998 START TEST nvmf_discovery 00:07:02.998 ************************************ 00:07:02.998 
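Annotation: each run_test re-sources nvmf/common.sh, which is why the PATH deduplication lines and the NIC enumeration repeat below for the discovery test. The device tables behind gather_supported_nvmf_pci_devs, as traced at nvmf/common.sh@296-318 inside that function (vendor IDs: 0x8086 Intel, 0x15b3 Mellanox):

  local intel=0x8086 mellanox=0x15b3
  e810+=(${pci_bus_cache["$intel:0x1592"]})    # Intel E810 family
  e810+=(${pci_bus_cache["$intel:0x159b"]})    # the ID both ports report on this rig
  x722+=(${pci_bus_cache["$intel:0x37d2"]})
  mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})  # plus the other ConnectX IDs in the trace
  pci_devs+=("${e810[@]}")                     # SPDK_TEST_NVMF_NICS=e810 narrows to e810

With tcp rather than rdma as the transport, the function only needs the kernel net devices under each matched PCI function, which is how cvl_0_0 and cvl_0_1 were discovered earlier.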
19:37:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:02.998 * Looking for test storage... 00:07:02.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.998 19:37:44 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.998 19:37:44 -- nvmf/common.sh@7 -- # uname -s 00:07:02.998 19:37:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.998 19:37:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.998 19:37:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.998 19:37:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.998 19:37:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.998 19:37:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.998 19:37:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.998 19:37:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.998 19:37:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.998 19:37:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.998 19:37:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.998 19:37:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.998 19:37:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.998 19:37:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.998 19:37:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.998 19:37:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:02.998 19:37:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.998 19:37:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.998 19:37:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.998 19:37:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.998 19:37:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.998 19:37:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.998 19:37:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.998 19:37:44 -- paths/export.sh@5 -- # export PATH 00:07:02.998 19:37:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.998 19:37:44 -- nvmf/common.sh@47 -- # : 0 00:07:02.998 19:37:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:02.998 19:37:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:02.998 19:37:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:02.998 19:37:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.998 19:37:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.998 19:37:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:02.998 19:37:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:02.998 19:37:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:02.998 19:37:44 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:02.998 19:37:44 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:02.998 19:37:44 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:02.998 19:37:44 -- target/discovery.sh@15 -- # hash nvme 00:07:02.998 19:37:44 -- target/discovery.sh@20 -- # nvmftestinit 00:07:02.998 19:37:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:02.998 19:37:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.998 19:37:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:02.998 19:37:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:02.998 19:37:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:02.998 19:37:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.998 19:37:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.998 19:37:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.998 19:37:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:02.998 19:37:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:02.998 19:37:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:02.998 19:37:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.531 19:37:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:05.531 19:37:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:05.531 19:37:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:05.531 19:37:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:05.531 19:37:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:05.531 19:37:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:05.531 19:37:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:05.531 19:37:46 -- 
nvmf/common.sh@295 -- # net_devs=()
00:07:05.532 19:37:46 -- nvmf/common.sh@295 -- # local -ga net_devs
00:07:05.532 19:37:46 -- nvmf/common.sh@296 -- # e810=()
00:07:05.532 19:37:46 -- nvmf/common.sh@296 -- # local -ga e810
00:07:05.532 19:37:46 -- nvmf/common.sh@297 -- # x722=()
00:07:05.532 19:37:46 -- nvmf/common.sh@297 -- # local -ga x722
00:07:05.532 19:37:46 -- nvmf/common.sh@298 -- # mlx=()
00:07:05.532 19:37:46 -- nvmf/common.sh@298 -- # local -ga mlx
00:07:05.532 19:37:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:05.532 19:37:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:05.532 19:37:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:05.532 19:37:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:05.532 19:37:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:05.532 19:37:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:05.532 19:37:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:05.532 19:37:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:05.532 19:37:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:05.532 19:37:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:05.532 19:37:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:05.532 19:37:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:07:05.532 19:37:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:07:05.532 19:37:46 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:07:05.532 19:37:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:05.532 19:37:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:07:05.532 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:07:05.532 19:37:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:05.532 19:37:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:07:05.532 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:07:05.532 19:37:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:07:05.532 19:37:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:05.532 19:37:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:05.532 19:37:46 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:07:05.532 19:37:46 -- nvmf/common.sh@388 -- #
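gather_supported_nvmf_pci_devs, traced above, matches NICs by PCI vendor/device ID (0x8086:0x159b for the two E810 ports on this host) and then looks up their kernel net device names under sysfs. A standalone sketch of the same lookup, independent of the SPDK helpers:

    # list E810 ports and their net devices, mirroring the scan in the trace above
    for dev in /sys/bus/pci/devices/*; do
        [ "$(cat "$dev/vendor" 2>/dev/null)" = 0x8086 ] || continue
        [ "$(cat "$dev/device" 2>/dev/null)" = 0x159b ] || continue
        echo "Found ${dev##*/} -> $(ls "$dev/net" 2>/dev/null)"   # e.g. 0000:0a:00.0 -> cvl_0_0
    done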
pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:05.532 19:37:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:07:05.532 Found net devices under 0000:0a:00.0: cvl_0_0
00:07:05.532 19:37:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:07:05.532 19:37:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:05.532 19:37:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:05.532 19:37:46 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:07:05.532 19:37:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:05.532 19:37:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:07:05.532 Found net devices under 0000:0a:00.1: cvl_0_1
00:07:05.532 19:37:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:07:05.532 19:37:46 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:07:05.532 19:37:46 -- nvmf/common.sh@403 -- # is_hw=yes
00:07:05.532 19:37:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:07:05.532 19:37:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:05.532 19:37:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:05.532 19:37:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:05.532 19:37:46 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:07:05.532 19:37:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:05.532 19:37:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:05.532 19:37:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:07:05.532 19:37:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:05.532 19:37:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:05.532 19:37:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:07:05.532 19:37:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:07:05.532 19:37:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:07:05.532 19:37:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:05.532 19:37:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:05.532 19:37:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:05.532 19:37:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:05.532 19:37:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:05.532 19:37:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:05.532 19:37:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:05.532 19:37:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:05.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:05.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms
00:07:05.532
00:07:05.532 --- 10.0.0.2 ping statistics ---
00:07:05.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:05.532 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms
00:07:05.532 19:37:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:05.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:05.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms
00:07:05.532
00:07:05.532 --- 10.0.0.1 ping statistics ---
00:07:05.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:05.532 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms
00:07:05.532 19:37:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:05.532 19:37:46 -- nvmf/common.sh@411 -- # return 0
00:07:05.532 19:37:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:07:05.532 19:37:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:05.532 19:37:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:07:05.532 19:37:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:05.532 19:37:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:07:05.532 19:37:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:07:05.532 19:37:46 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:07:05.532 19:37:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:07:05.532 19:37:46 -- common/autotest_common.sh@710 -- # xtrace_disable
00:07:05.532 19:37:46 -- common/autotest_common.sh@10 -- # set +x
00:07:05.532 19:37:46 -- nvmf/common.sh@470 -- # nvmfpid=1607241
00:07:05.532 19:37:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:05.532 19:37:46 -- nvmf/common.sh@471 -- # waitforlisten 1607241
00:07:05.532 19:37:46 -- common/autotest_common.sh@817 -- # '[' -z 1607241 ']'
00:07:05.532 19:37:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:05.532 19:37:46 -- common/autotest_common.sh@822 -- # local max_retries=100
00:07:05.532 19:37:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:05.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:05.532 19:37:46 -- common/autotest_common.sh@826 -- # xtrace_disable
00:07:05.532 19:37:46 -- common/autotest_common.sh@10 -- # set +x
00:07:05.533 [2024-04-24 19:37:46.779986] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization...
00:07:05.533 [2024-04-24 19:37:46.780075] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:05.533 EAL: No free 2048 kB hugepages reported on node 1
00:07:05.533 [2024-04-24 19:37:46.850159] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:05.533 [2024-04-24 19:37:46.970837] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:05.533 [2024-04-24 19:37:46.970899] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:05.533 [2024-04-24 19:37:46.970916] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:05.533 [2024-04-24 19:37:46.970929] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:05.533 [2024-04-24 19:37:46.970953] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
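nvmf_tcp_init, whose trace ends above, builds the physical-loopback topology for the test: one E810 port (cvl_0_0, 10.0.0.2) moves into the cvl_0_0_ns_spdk namespace to host the target, the other (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, an iptables rule admits NVMe/TCP on port 4420, and both directions are ping-verified. Condensed from the trace (run as root; interface names are specific to this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator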
00:07:05.533 [2024-04-24 19:37:46.971041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.533 [2024-04-24 19:37:46.971095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.533 [2024-04-24 19:37:46.971150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.533 [2024-04-24 19:37:46.971154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.468 19:37:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:06.468 19:37:47 -- common/autotest_common.sh@850 -- # return 0 00:07:06.468 19:37:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:06.468 19:37:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:06.468 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.468 19:37:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.468 19:37:47 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:06.468 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.468 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.468 [2024-04-24 19:37:47.782868] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.468 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.468 19:37:47 -- target/discovery.sh@26 -- # seq 1 4 00:07:06.468 19:37:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:06.468 19:37:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:06.468 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.468 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 Null1 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 [2024-04-24 19:37:47.823124] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:06.469 19:37:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 Null2 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:06.469 19:37:47 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:06.469 19:37:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 Null3 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:06.469 19:37:47 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 Null4 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:06.469 
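Each pass of the seq 1 4 loop above provisions one subsystem end to end: null bdev, subsystem, namespace, TCP listener. rpc_cmd is the test suite's wrapper around scripts/rpc.py; issued by hand against the same running target, the first iteration would be roughly (default RPC socket assumed):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192         # once, before the loop
    $rpc bdev_null_create Null1 102400 512               # NULL_BDEV_SIZE / NULL_BLOCK_SIZE above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420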
19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:06.469 19:37:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.469 19:37:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.469 19:37:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.469 19:37:47 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:07:06.727 00:07:06.727 Discovery Log Number of Records 6, Generation counter 6 00:07:06.727 =====Discovery Log Entry 0====== 00:07:06.727 trtype: tcp 00:07:06.727 adrfam: ipv4 00:07:06.727 subtype: current discovery subsystem 00:07:06.727 treq: not required 00:07:06.727 portid: 0 00:07:06.727 trsvcid: 4420 00:07:06.727 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:06.727 traddr: 10.0.0.2 00:07:06.727 eflags: explicit discovery connections, duplicate discovery information 00:07:06.727 sectype: none 00:07:06.727 =====Discovery Log Entry 1====== 00:07:06.727 trtype: tcp 00:07:06.727 adrfam: ipv4 00:07:06.727 subtype: nvme subsystem 00:07:06.727 treq: not required 00:07:06.727 portid: 0 00:07:06.727 trsvcid: 4420 00:07:06.727 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:06.727 traddr: 10.0.0.2 00:07:06.727 eflags: none 00:07:06.727 sectype: none 00:07:06.727 =====Discovery Log Entry 2====== 00:07:06.727 trtype: tcp 00:07:06.727 adrfam: ipv4 00:07:06.727 subtype: nvme subsystem 00:07:06.727 treq: not required 00:07:06.727 portid: 0 00:07:06.727 trsvcid: 4420 00:07:06.727 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:06.727 traddr: 10.0.0.2 00:07:06.727 eflags: none 00:07:06.727 sectype: none 00:07:06.727 =====Discovery Log Entry 3====== 00:07:06.727 trtype: tcp 00:07:06.727 adrfam: ipv4 00:07:06.727 subtype: nvme subsystem 00:07:06.727 treq: not required 00:07:06.727 portid: 0 00:07:06.727 trsvcid: 4420 00:07:06.727 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:06.727 traddr: 10.0.0.2 00:07:06.727 eflags: none 00:07:06.727 sectype: none 00:07:06.727 =====Discovery Log Entry 4====== 00:07:06.727 trtype: tcp 00:07:06.727 adrfam: ipv4 00:07:06.727 subtype: nvme subsystem 00:07:06.727 treq: not required 00:07:06.727 portid: 0 00:07:06.727 trsvcid: 4420 00:07:06.727 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:06.727 traddr: 10.0.0.2 00:07:06.727 eflags: none 00:07:06.727 sectype: none 00:07:06.727 =====Discovery Log Entry 5====== 00:07:06.727 trtype: tcp 00:07:06.727 adrfam: ipv4 00:07:06.727 subtype: discovery subsystem referral 00:07:06.728 treq: not required 00:07:06.728 portid: 0 00:07:06.728 trsvcid: 4430 00:07:06.728 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:06.728 traddr: 10.0.0.2 00:07:06.728 eflags: none 00:07:06.728 sectype: none 00:07:06.728 19:37:48 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:06.728 Perform nvmf subsystem discovery via RPC 00:07:06.728 19:37:48 -- 
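The six-record listing above is the human-readable discovery log; the later checks in this suite consume the same data as JSON. A sketch of pulling it apart with jq (record field names as emitted by the nvme-cli build used in this run):

    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -o json |
        jq -r '.records[] | [.subtype, .subnqn, .traddr] | @tsv'
    # expected rows: the current discovery subsystem, cnode1-4, and one referral on 4430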
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:06.728 19:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.728 19:37:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.728 [2024-04-24 19:37:48.156078] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:06.728 [ 00:07:06.728 { 00:07:06.728 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:06.728 "subtype": "Discovery", 00:07:06.728 "listen_addresses": [ 00:07:06.728 { 00:07:06.728 "transport": "TCP", 00:07:06.728 "trtype": "TCP", 00:07:06.728 "adrfam": "IPv4", 00:07:06.728 "traddr": "10.0.0.2", 00:07:06.728 "trsvcid": "4420" 00:07:06.728 } 00:07:06.728 ], 00:07:06.728 "allow_any_host": true, 00:07:06.728 "hosts": [] 00:07:06.728 }, 00:07:06.728 { 00:07:06.728 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:06.728 "subtype": "NVMe", 00:07:06.728 "listen_addresses": [ 00:07:06.728 { 00:07:06.728 "transport": "TCP", 00:07:06.728 "trtype": "TCP", 00:07:06.728 "adrfam": "IPv4", 00:07:06.728 "traddr": "10.0.0.2", 00:07:06.728 "trsvcid": "4420" 00:07:06.728 } 00:07:06.728 ], 00:07:06.728 "allow_any_host": true, 00:07:06.728 "hosts": [], 00:07:06.728 "serial_number": "SPDK00000000000001", 00:07:06.728 "model_number": "SPDK bdev Controller", 00:07:06.728 "max_namespaces": 32, 00:07:06.728 "min_cntlid": 1, 00:07:06.728 "max_cntlid": 65519, 00:07:06.728 "namespaces": [ 00:07:06.728 { 00:07:06.728 "nsid": 1, 00:07:06.728 "bdev_name": "Null1", 00:07:06.728 "name": "Null1", 00:07:06.728 "nguid": "DD2628CFCA4C48219B9195F700ECB0AD", 00:07:06.728 "uuid": "dd2628cf-ca4c-4821-9b91-95f700ecb0ad" 00:07:06.728 } 00:07:06.728 ] 00:07:06.728 }, 00:07:06.728 { 00:07:06.728 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:06.728 "subtype": "NVMe", 00:07:06.728 "listen_addresses": [ 00:07:06.728 { 00:07:06.728 "transport": "TCP", 00:07:06.728 "trtype": "TCP", 00:07:06.728 "adrfam": "IPv4", 00:07:06.728 "traddr": "10.0.0.2", 00:07:06.728 "trsvcid": "4420" 00:07:06.728 } 00:07:06.728 ], 00:07:06.728 "allow_any_host": true, 00:07:06.728 "hosts": [], 00:07:06.728 "serial_number": "SPDK00000000000002", 00:07:06.728 "model_number": "SPDK bdev Controller", 00:07:06.728 "max_namespaces": 32, 00:07:06.728 "min_cntlid": 1, 00:07:06.728 "max_cntlid": 65519, 00:07:06.728 "namespaces": [ 00:07:06.728 { 00:07:06.728 "nsid": 1, 00:07:06.728 "bdev_name": "Null2", 00:07:06.728 "name": "Null2", 00:07:06.728 "nguid": "C1EE9B515763440EB12571DA953A3C8B", 00:07:06.728 "uuid": "c1ee9b51-5763-440e-b125-71da953a3c8b" 00:07:06.728 } 00:07:06.728 ] 00:07:06.728 }, 00:07:06.728 { 00:07:06.728 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:06.728 "subtype": "NVMe", 00:07:06.728 "listen_addresses": [ 00:07:06.728 { 00:07:06.728 "transport": "TCP", 00:07:06.728 "trtype": "TCP", 00:07:06.728 "adrfam": "IPv4", 00:07:06.728 "traddr": "10.0.0.2", 00:07:06.728 "trsvcid": "4420" 00:07:06.728 } 00:07:06.728 ], 00:07:06.728 "allow_any_host": true, 00:07:06.728 "hosts": [], 00:07:06.728 "serial_number": "SPDK00000000000003", 00:07:06.728 "model_number": "SPDK bdev Controller", 00:07:06.728 "max_namespaces": 32, 00:07:06.728 "min_cntlid": 1, 00:07:06.728 "max_cntlid": 65519, 00:07:06.728 "namespaces": [ 00:07:06.728 { 00:07:06.728 "nsid": 1, 00:07:06.728 "bdev_name": "Null3", 00:07:06.728 "name": "Null3", 00:07:06.728 "nguid": "49516A1B485A4944B0DC8DFB428C8CFA", 00:07:06.728 "uuid": "49516a1b-485a-4944-b0dc-8dfb428c8cfa" 00:07:06.728 } 00:07:06.728 ] 
00:07:06.728 }, 00:07:06.728 { 00:07:06.728 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:06.728 "subtype": "NVMe", 00:07:06.728 "listen_addresses": [ 00:07:06.728 { 00:07:06.728 "transport": "TCP", 00:07:06.728 "trtype": "TCP", 00:07:06.728 "adrfam": "IPv4", 00:07:06.728 "traddr": "10.0.0.2", 00:07:06.728 "trsvcid": "4420" 00:07:06.728 } 00:07:06.728 ], 00:07:06.728 "allow_any_host": true, 00:07:06.728 "hosts": [], 00:07:06.728 "serial_number": "SPDK00000000000004", 00:07:06.728 "model_number": "SPDK bdev Controller", 00:07:06.728 "max_namespaces": 32, 00:07:06.728 "min_cntlid": 1, 00:07:06.728 "max_cntlid": 65519, 00:07:06.728 "namespaces": [ 00:07:06.728 { 00:07:06.728 "nsid": 1, 00:07:06.728 "bdev_name": "Null4", 00:07:06.728 "name": "Null4", 00:07:06.728 "nguid": "2C5B2B1928264DF78186F18039DABBE5", 00:07:06.728 "uuid": "2c5b2b19-2826-4df7-8186-f18039dabbe5" 00:07:06.728 } 00:07:06.728 ] 00:07:06.728 } 00:07:06.728 ] 00:07:06.728 19:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.728 19:37:48 -- target/discovery.sh@42 -- # seq 1 4 00:07:06.728 19:37:48 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:06.728 19:37:48 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:06.728 19:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.728 19:37:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.728 19:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.728 19:37:48 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:06.728 19:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.728 19:37:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.728 19:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.728 19:37:48 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:06.728 19:37:48 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:06.728 19:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.728 19:37:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.728 19:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.728 19:37:48 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:06.728 19:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.728 19:37:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.728 19:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.728 19:37:48 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:06.728 19:37:48 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:06.728 19:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.728 19:37:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.728 19:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.728 19:37:48 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:06.728 19:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.728 19:37:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.728 19:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.728 19:37:48 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:06.728 19:37:48 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:06.728 19:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.728 19:37:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.728 19:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
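The JSON above is the raw nvmf_get_subsystems reply; the test only needs it to parse, but the same dump answers most "what is the target serving" questions directly. For example, summarizing it with jq (rpc points at scripts/rpc.py as in the earlier sketch):

    $rpc nvmf_get_subsystems |
        jq -r '.[] | select(.subtype == "NVMe") |
               "\(.nqn) serial=\(.serial_number) namespaces=\(.namespaces | length)"'
    # -> nqn.2016-06.io.spdk:cnode1 serial=SPDK00000000000001 namespaces=1, and so on for cnode2-4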
00:07:06.728 19:37:48 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:06.728 19:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.728 19:37:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.728 19:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.728 19:37:48 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:06.728 19:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.728 19:37:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.987 19:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.987 19:37:48 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:06.987 19:37:48 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:06.987 19:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.987 19:37:48 -- common/autotest_common.sh@10 -- # set +x 00:07:06.987 19:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.987 19:37:48 -- target/discovery.sh@49 -- # check_bdevs= 00:07:06.987 19:37:48 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:06.987 19:37:48 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:06.987 19:37:48 -- target/discovery.sh@57 -- # nvmftestfini 00:07:06.987 19:37:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:06.987 19:37:48 -- nvmf/common.sh@117 -- # sync 00:07:06.987 19:37:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:06.987 19:37:48 -- nvmf/common.sh@120 -- # set +e 00:07:06.987 19:37:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:06.987 19:37:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:06.987 rmmod nvme_tcp 00:07:06.987 rmmod nvme_fabrics 00:07:06.987 rmmod nvme_keyring 00:07:06.987 19:37:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:06.988 19:37:48 -- nvmf/common.sh@124 -- # set -e 00:07:06.988 19:37:48 -- nvmf/common.sh@125 -- # return 0 00:07:06.988 19:37:48 -- nvmf/common.sh@478 -- # '[' -n 1607241 ']' 00:07:06.988 19:37:48 -- nvmf/common.sh@479 -- # killprocess 1607241 00:07:06.988 19:37:48 -- common/autotest_common.sh@936 -- # '[' -z 1607241 ']' 00:07:06.988 19:37:48 -- common/autotest_common.sh@940 -- # kill -0 1607241 00:07:06.988 19:37:48 -- common/autotest_common.sh@941 -- # uname 00:07:06.988 19:37:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:06.988 19:37:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1607241 00:07:06.988 19:37:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:06.988 19:37:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:06.988 19:37:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1607241' 00:07:06.988 killing process with pid 1607241 00:07:06.988 19:37:48 -- common/autotest_common.sh@955 -- # kill 1607241 00:07:06.988 [2024-04-24 19:37:48.370879] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:06.988 19:37:48 -- common/autotest_common.sh@960 -- # wait 1607241 00:07:07.247 19:37:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:07.247 19:37:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:07.247 19:37:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:07.247 19:37:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:07.247 19:37:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:07.247 19:37:48 -- nvmf/common.sh@617 -- # 
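nvmfcleanup, entered in the teardown above, wraps the module unloads in set +e and a {1..20} loop because nvme-tcp can stay busy for a moment after the last controller goes away. The visible shape of that pattern (any retry delay beyond what the trace shows is a guess):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1    # hypothetical backoff between attempts
    done
    set -e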
xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.247 19:37:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.247 19:37:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.183 19:37:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:09.183 00:07:09.183 real 0m6.341s 00:07:09.183 user 0m7.675s 00:07:09.183 sys 0m1.913s 00:07:09.183 19:37:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:09.183 19:37:50 -- common/autotest_common.sh@10 -- # set +x 00:07:09.183 ************************************ 00:07:09.183 END TEST nvmf_discovery 00:07:09.183 ************************************ 00:07:09.442 19:37:50 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:09.442 19:37:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:09.442 19:37:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.442 19:37:50 -- common/autotest_common.sh@10 -- # set +x 00:07:09.442 ************************************ 00:07:09.442 START TEST nvmf_referrals 00:07:09.442 ************************************ 00:07:09.442 19:37:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:09.442 * Looking for test storage... 00:07:09.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.442 19:37:50 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.442 19:37:50 -- nvmf/common.sh@7 -- # uname -s 00:07:09.442 19:37:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.442 19:37:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.442 19:37:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.442 19:37:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.442 19:37:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.442 19:37:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.442 19:37:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.442 19:37:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.442 19:37:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.442 19:37:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.442 19:37:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:09.442 19:37:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:09.442 19:37:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.442 19:37:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.442 19:37:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.442 19:37:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.442 19:37:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.442 19:37:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.442 19:37:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.442 19:37:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.442 19:37:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.442 19:37:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.443 19:37:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.443 19:37:50 -- paths/export.sh@5 -- # export PATH 00:07:09.443 19:37:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.443 19:37:50 -- nvmf/common.sh@47 -- # : 0 00:07:09.443 19:37:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:09.443 19:37:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:09.443 19:37:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.443 19:37:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.443 19:37:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.443 19:37:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:09.443 19:37:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:09.443 19:37:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:09.443 19:37:50 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:09.443 19:37:50 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:09.443 19:37:50 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:09.443 19:37:50 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:09.443 19:37:50 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:09.443 19:37:50 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:09.443 19:37:50 -- target/referrals.sh@37 -- # nvmftestinit 00:07:09.443 19:37:50 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:07:09.443 19:37:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.443 19:37:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:09.443 19:37:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:09.443 19:37:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:09.443 19:37:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.443 19:37:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.443 19:37:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.443 19:37:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:09.443 19:37:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:09.443 19:37:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:09.443 19:37:50 -- common/autotest_common.sh@10 -- # set +x 00:07:11.977 19:37:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:11.977 19:37:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:11.978 19:37:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:11.978 19:37:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:11.978 19:37:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:11.978 19:37:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:11.978 19:37:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:11.978 19:37:52 -- nvmf/common.sh@295 -- # net_devs=() 00:07:11.978 19:37:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:11.978 19:37:52 -- nvmf/common.sh@296 -- # e810=() 00:07:11.978 19:37:52 -- nvmf/common.sh@296 -- # local -ga e810 00:07:11.978 19:37:52 -- nvmf/common.sh@297 -- # x722=() 00:07:11.978 19:37:52 -- nvmf/common.sh@297 -- # local -ga x722 00:07:11.978 19:37:52 -- nvmf/common.sh@298 -- # mlx=() 00:07:11.978 19:37:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:11.978 19:37:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.978 19:37:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.978 19:37:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.978 19:37:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.978 19:37:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.978 19:37:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.978 19:37:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.978 19:37:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.978 19:37:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.978 19:37:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.978 19:37:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.978 19:37:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:11.978 19:37:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:11.978 19:37:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:11.978 19:37:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:11.978 19:37:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:11.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:11.978 19:37:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:11.978 19:37:52 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:11.978 19:37:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:11.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:11.978 19:37:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:11.978 19:37:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:11.978 19:37:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.978 19:37:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:11.978 19:37:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.978 19:37:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:11.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:11.978 19:37:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.978 19:37:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:11.978 19:37:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.978 19:37:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:11.978 19:37:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.978 19:37:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:11.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:11.978 19:37:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.978 19:37:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:11.978 19:37:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:11.978 19:37:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:11.978 19:37:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:11.978 19:37:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:11.978 19:37:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:11.978 19:37:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:11.978 19:37:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:11.978 19:37:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:11.978 19:37:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:11.978 19:37:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:11.978 19:37:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:11.978 19:37:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:11.978 19:37:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:11.978 19:37:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:11.978 19:37:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:11.978 19:37:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:07:11.978 19:37:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:11.978 19:37:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:11.978 19:37:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:11.978 19:37:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:11.978 19:37:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:11.978 19:37:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:11.978 19:37:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:11.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:11.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms
00:07:11.978
00:07:11.978 --- 10.0.0.2 ping statistics ---
00:07:11.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:11.978 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms
00:07:11.978 19:37:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:11.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:11.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms
00:07:11.978
00:07:11.978 --- 10.0.0.1 ping statistics ---
00:07:11.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:11.978 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms
00:07:11.978 19:37:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:11.978 19:37:53 -- nvmf/common.sh@411 -- # return 0
00:07:11.978 19:37:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:07:11.978 19:37:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:11.978 19:37:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:07:11.978 19:37:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:07:11.978 19:37:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:11.978 19:37:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:07:11.978 19:37:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:07:11.978 19:37:53 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:07:11.978 19:37:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:07:11.978 19:37:53 -- common/autotest_common.sh@710 -- # xtrace_disable
00:07:11.978 19:37:53 -- common/autotest_common.sh@10 -- # set +x
00:07:11.978 19:37:53 -- nvmf/common.sh@470 -- # nvmfpid=1609359
00:07:11.978 19:37:53 -- nvmf/common.sh@471 -- # waitforlisten 1609359
00:07:11.978 19:37:53 -- common/autotest_common.sh@817 -- # '[' -z 1609359 ']'
00:07:11.978 19:37:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:11.978 19:37:53 -- common/autotest_common.sh@822 -- # local max_retries=100
00:07:11.978 19:37:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:11.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:11.978 19:37:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:07:11.978 19:37:53 -- common/autotest_common.sh@826 -- # xtrace_disable
00:07:11.978 19:37:53 -- common/autotest_common.sh@10 -- # set +x
00:07:11.978 [2024-04-24 19:37:53.169423] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization...
00:07:11.978 [2024-04-24 19:37:53.169515] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.978 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.978 [2024-04-24 19:37:53.239521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.978 [2024-04-24 19:37:53.361377] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.978 [2024-04-24 19:37:53.361435] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.978 [2024-04-24 19:37:53.361452] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.978 [2024-04-24 19:37:53.361466] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.978 [2024-04-24 19:37:53.361478] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:11.978 [2024-04-24 19:37:53.361547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.978 [2024-04-24 19:37:53.361606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.978 [2024-04-24 19:37:53.361646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.978 [2024-04-24 19:37:53.361665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.911 19:37:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:12.911 19:37:54 -- common/autotest_common.sh@850 -- # return 0 00:07:12.911 19:37:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:12.911 19:37:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:12.911 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:12.911 19:37:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.911 19:37:54 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:12.911 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.911 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:12.911 [2024-04-24 19:37:54.140746] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.911 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.911 19:37:54 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:12.911 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.911 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:12.911 [2024-04-24 19:37:54.152928] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:12.911 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.911 19:37:54 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:12.911 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.911 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:12.911 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.911 19:37:54 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:12.911 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.911 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:12.911 19:37:54 -- common/autotest_common.sh@577 -- # 
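Once the discovery listener is up on port 8009, the referral test stocks the referral list purely over RPC and trims it again later. The same sequence by hand (rpc as in the earlier sketches; the 127.0.0.x targets need not exist, referrals are only advertised):

    $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals | jq length     # -> 3, as the test asserts above
    $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430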
[[ 0 == 0 ]] 00:07:12.911 19:37:54 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:12.911 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.911 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:12.911 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.911 19:37:54 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:12.911 19:37:54 -- target/referrals.sh@48 -- # jq length 00:07:12.911 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.911 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:12.911 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.911 19:37:54 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:12.911 19:37:54 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:12.911 19:37:54 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:12.911 19:37:54 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:12.911 19:37:54 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:12.911 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.911 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:12.911 19:37:54 -- target/referrals.sh@21 -- # sort 00:07:12.911 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.911 19:37:54 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:12.911 19:37:54 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:12.911 19:37:54 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:12.911 19:37:54 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:12.911 19:37:54 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:12.911 19:37:54 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:12.911 19:37:54 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:12.911 19:37:54 -- target/referrals.sh@26 -- # sort 00:07:13.170 19:37:54 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:13.170 19:37:54 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:13.170 19:37:54 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:13.170 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.170 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:13.170 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.170 19:37:54 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:13.170 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.170 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:13.170 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.170 19:37:54 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:13.170 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.170 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:13.170 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.170 19:37:54 -- target/referrals.sh@56 -- # rpc_cmd 
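get_referral_ips, exercised repeatedly above, compares two views of the same state: what the target reports over RPC and what a host actually sees in the discovery log on the wire. Reduced to its two probes, with the jq filters copied from the trace:

    rpc_view=$($rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
    wire_view=$(nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
    [ "$rpc_view" = "$wire_view" ] || echo "referral views disagree"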
nvmf_discovery_get_referrals 00:07:13.170 19:37:54 -- target/referrals.sh@56 -- # jq length 00:07:13.170 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.170 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:13.170 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.170 19:37:54 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:13.170 19:37:54 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:13.170 19:37:54 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:13.170 19:37:54 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:13.170 19:37:54 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:13.170 19:37:54 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:13.170 19:37:54 -- target/referrals.sh@26 -- # sort 00:07:13.170 19:37:54 -- target/referrals.sh@26 -- # echo 00:07:13.170 19:37:54 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:13.170 19:37:54 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:13.170 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.170 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:13.170 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.170 19:37:54 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:13.170 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.170 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:13.170 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.170 19:37:54 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:13.170 19:37:54 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:13.170 19:37:54 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:13.170 19:37:54 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:13.170 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.170 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:13.170 19:37:54 -- target/referrals.sh@21 -- # sort 00:07:13.170 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.170 19:37:54 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:13.170 19:37:54 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:13.170 19:37:54 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:13.170 19:37:54 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:13.170 19:37:54 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:13.170 19:37:54 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:13.170 19:37:54 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:13.170 19:37:54 -- target/referrals.sh@26 -- # sort 00:07:13.428 19:37:54 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:13.428 19:37:54 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:13.428 19:37:54 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:07:13.428 19:37:54 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:13.428 19:37:54 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:13.428 19:37:54 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:13.428 19:37:54 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:13.428 19:37:54 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:13.428 19:37:54 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:13.428 19:37:54 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:13.428 19:37:54 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:13.428 19:37:54 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:13.428 19:37:54 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:13.428 19:37:54 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:13.428 19:37:54 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:13.428 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.428 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:13.428 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.428 19:37:54 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:13.428 19:37:54 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:13.428 19:37:54 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:13.428 19:37:54 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:13.428 19:37:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.428 19:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:13.428 19:37:54 -- target/referrals.sh@21 -- # sort 00:07:13.428 19:37:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.686 19:37:54 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:13.686 19:37:54 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:13.686 19:37:54 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:13.686 19:37:54 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:13.686 19:37:54 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:13.686 19:37:54 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:13.686 19:37:54 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:13.686 19:37:54 -- target/referrals.sh@26 -- # sort 00:07:13.686 19:37:55 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:13.686 19:37:55 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:13.686 19:37:55 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:13.686 19:37:55 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:13.686 19:37:55 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:07:13.686 19:37:55 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:13.686 19:37:55 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:13.944 19:37:55 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:13.944 19:37:55 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:13.944 19:37:55 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:13.944 19:37:55 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:13.944 19:37:55 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:13.944 19:37:55 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:13.944 19:37:55 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:13.944 19:37:55 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:13.944 19:37:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.944 19:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:13.944 19:37:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.944 19:37:55 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:13.944 19:37:55 -- target/referrals.sh@82 -- # jq length 00:07:13.944 19:37:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.944 19:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:13.944 19:37:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.944 19:37:55 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:13.944 19:37:55 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:13.944 19:37:55 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:13.944 19:37:55 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:13.944 19:37:55 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:13.944 19:37:55 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:13.944 19:37:55 -- target/referrals.sh@26 -- # sort 00:07:14.202 19:37:55 -- target/referrals.sh@26 -- # echo 00:07:14.203 19:37:55 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:14.203 19:37:55 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:14.203 19:37:55 -- target/referrals.sh@86 -- # nvmftestfini 00:07:14.203 19:37:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:14.203 19:37:55 -- nvmf/common.sh@117 -- # sync 00:07:14.203 19:37:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:14.203 19:37:55 -- nvmf/common.sh@120 -- # set +e 00:07:14.203 19:37:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:14.203 19:37:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:14.203 rmmod nvme_tcp 00:07:14.203 rmmod nvme_fabrics 00:07:14.203 rmmod nvme_keyring 00:07:14.203 19:37:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:14.203 19:37:55 -- nvmf/common.sh@124 -- # set -e 
00:07:14.203 19:37:55 -- nvmf/common.sh@125 -- # return 0
00:07:14.203 19:37:55 -- nvmf/common.sh@478 -- # '[' -n 1609359 ']'
00:07:14.203 19:37:55 -- nvmf/common.sh@479 -- # killprocess 1609359
00:07:14.203 19:37:55 -- common/autotest_common.sh@936 -- # '[' -z 1609359 ']'
00:07:14.203 19:37:55 -- common/autotest_common.sh@940 -- # kill -0 1609359
00:07:14.203 19:37:55 -- common/autotest_common.sh@941 -- # uname
00:07:14.203 19:37:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:14.203 19:37:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1609359
00:07:14.203 19:37:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:14.203 19:37:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:14.203 19:37:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1609359'
00:07:14.203 killing process with pid 1609359
00:07:14.203 19:37:55 -- common/autotest_common.sh@955 -- # kill 1609359
00:07:14.203 19:37:55 -- common/autotest_common.sh@960 -- # wait 1609359
00:07:14.462 19:37:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:07:14.462 19:37:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:07:14.462 19:37:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:07:14.462 19:37:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:14.462 19:37:55 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:14.462 19:37:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:14.462 19:37:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:14.462 19:37:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:16.996 19:37:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:16.996
00:07:16.996 real 0m7.064s
00:07:16.996 user 0m11.339s
00:07:16.996 sys 0m2.177s
00:07:16.996 19:37:57 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:16.996 19:37:57 -- common/autotest_common.sh@10 -- # set +x
00:07:16.996 ************************************
00:07:16.996 END TEST nvmf_referrals
00:07:16.996 ************************************
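The nvmf_referrals run that just finished reduces to a short RPC sequence against the discovery service. The following is a minimal bash sketch of that flow, assuming scripts/rpc.py from this same SPDK checkout is called directly instead of through the harness's rpc_cmd wrapper, and leaving out the generated --hostnqn/--hostid flags the trace passes to nvme discover:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed rpc.py location
  $rpc nvmf_create_transport -t tcp -o -u 8192                 # same transport flags as the run above
  $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                  # publish three referrals
    $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  $rpc nvmf_discovery_get_referrals | jq length                # expect 3
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json             # the host sees the referral records
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                  # remove them again
    $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  $rpc nvmf_discovery_get_referrals | jq length                # expect 0

The second half of the test adds -n to pin a referral to a specific subsystem NQN (nqn.2016-06.io.spdk:cnode1 or the discovery NQN) and checks the advertised subtype and subnqn through the same nvme discover JSON.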
00:07:16.996 19:37:57 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:07:16.996 19:37:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:07:16.996 19:37:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:16.996 19:37:57 -- common/autotest_common.sh@10 -- # set +x
00:07:16.996 ************************************
00:07:16.996 START TEST nvmf_connect_disconnect
00:07:16.996 ************************************
00:07:16.996 19:37:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:07:16.996 * Looking for test storage...
00:07:16.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:16.996 19:37:58 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:16.996 19:37:58 -- nvmf/common.sh@7 -- # uname -s
00:07:16.996 19:37:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:16.996 19:37:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:16.996 19:37:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:16.996 19:37:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:16.996 19:37:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:16.996 19:37:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:16.996 19:37:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:16.996 19:37:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:16.996 19:37:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:16.996 19:37:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:16.996 19:37:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:07:16.996 19:37:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:07:16.996 19:37:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:16.996 19:37:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:16.996 19:37:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:16.996 19:37:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:16.996 19:37:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:16.996 19:37:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:16.996 19:37:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:16.996 19:37:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:16.996 19:37:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:16.996 19:37:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:16.996 19:37:58 -- paths/export.sh@4 -- #
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.996 19:37:58 -- paths/export.sh@5 -- # export PATH 00:07:16.996 19:37:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.996 19:37:58 -- nvmf/common.sh@47 -- # : 0 00:07:16.996 19:37:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:16.996 19:37:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:16.996 19:37:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.996 19:37:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.996 19:37:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.996 19:37:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:16.996 19:37:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:16.997 19:37:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:16.997 19:37:58 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:16.997 19:37:58 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:16.997 19:37:58 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:16.997 19:37:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:16.997 19:37:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.997 19:37:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:16.997 19:37:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:16.997 19:37:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:16.997 19:37:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.997 19:37:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:16.997 19:37:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.997 19:37:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:16.997 19:37:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:16.997 19:37:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:16.997 19:37:58 -- common/autotest_common.sh@10 -- # set +x 00:07:18.898 19:37:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:18.898 19:37:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:18.898 19:37:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:18.898 19:37:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:18.898 19:37:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:18.898 19:37:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:18.898 19:37:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:18.898 19:37:59 -- nvmf/common.sh@295 -- # net_devs=() 00:07:18.898 19:37:59 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:07:18.898 19:37:59 -- nvmf/common.sh@296 -- # e810=() 00:07:18.898 19:37:59 -- nvmf/common.sh@296 -- # local -ga e810 00:07:18.898 19:37:59 -- nvmf/common.sh@297 -- # x722=() 00:07:18.898 19:37:59 -- nvmf/common.sh@297 -- # local -ga x722 00:07:18.898 19:37:59 -- nvmf/common.sh@298 -- # mlx=() 00:07:18.898 19:37:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:18.898 19:37:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.898 19:37:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.898 19:37:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.898 19:37:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.898 19:37:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.898 19:37:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.898 19:37:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.898 19:37:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.898 19:37:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.898 19:37:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.898 19:37:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.898 19:37:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:18.898 19:37:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:18.898 19:37:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:18.898 19:37:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.898 19:37:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:18.898 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:18.898 19:37:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.898 19:37:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:18.898 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:18.898 19:37:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:18.898 19:37:59 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.898 19:37:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.898 19:37:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:18.898 19:37:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.898 19:37:59 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:07:18.898 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:18.898 19:37:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.898 19:37:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.898 19:37:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.898 19:37:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:18.898 19:37:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.898 19:37:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:18.898 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:18.898 19:37:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.898 19:37:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:18.898 19:37:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:18.898 19:37:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:18.898 19:37:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:18.898 19:37:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:18.898 19:37:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:18.898 19:37:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:18.898 19:37:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:18.898 19:37:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:18.898 19:37:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:18.898 19:37:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:18.898 19:37:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:18.898 19:37:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:18.898 19:37:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:18.898 19:37:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:18.898 19:37:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:18.898 19:37:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:18.898 19:38:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:18.898 19:38:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:18.898 19:38:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:18.898 19:38:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:18.898 19:38:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:18.898 19:38:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:18.898 19:38:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:18.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:18.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:07:18.898 00:07:18.898 --- 10.0.0.2 ping statistics --- 00:07:18.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.898 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:18.898 19:38:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:18.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:18.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:07:18.898 00:07:18.898 --- 10.0.0.1 ping statistics --- 00:07:18.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:18.898 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:07:18.898 19:38:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:18.898 19:38:00 -- nvmf/common.sh@411 -- # return 0 00:07:18.898 19:38:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:18.899 19:38:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:18.899 19:38:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:18.899 19:38:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:18.899 19:38:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:18.899 19:38:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:18.899 19:38:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:18.899 19:38:00 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:18.899 19:38:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:18.899 19:38:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:18.899 19:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:18.899 19:38:00 -- nvmf/common.sh@470 -- # nvmfpid=1611671 00:07:18.899 19:38:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:18.899 19:38:00 -- nvmf/common.sh@471 -- # waitforlisten 1611671 00:07:18.899 19:38:00 -- common/autotest_common.sh@817 -- # '[' -z 1611671 ']' 00:07:18.899 19:38:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.899 19:38:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:18.899 19:38:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.899 19:38:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:18.899 19:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:18.899 [2024-04-24 19:38:00.190753] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:07:18.899 [2024-04-24 19:38:00.190846] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.899 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.899 [2024-04-24 19:38:00.260311] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.899 [2024-04-24 19:38:00.380084] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.899 [2024-04-24 19:38:00.380151] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.899 [2024-04-24 19:38:00.380174] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.899 [2024-04-24 19:38:00.380188] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.899 [2024-04-24 19:38:00.380200] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:18.899 [2024-04-24 19:38:00.380287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.899 [2024-04-24 19:38:00.380341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.899 [2024-04-24 19:38:00.380394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.899 [2024-04-24 19:38:00.380397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.833 19:38:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:19.833 19:38:01 -- common/autotest_common.sh@850 -- # return 0 00:07:19.833 19:38:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:19.833 19:38:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:19.833 19:38:01 -- common/autotest_common.sh@10 -- # set +x 00:07:19.833 19:38:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:19.833 19:38:01 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:19.833 19:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.833 19:38:01 -- common/autotest_common.sh@10 -- # set +x 00:07:19.833 [2024-04-24 19:38:01.198843] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:19.833 19:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.833 19:38:01 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:19.833 19:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.833 19:38:01 -- common/autotest_common.sh@10 -- # set +x 00:07:19.833 19:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.833 19:38:01 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:19.833 19:38:01 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:19.833 19:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.833 19:38:01 -- common/autotest_common.sh@10 -- # set +x 00:07:19.833 19:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.833 19:38:01 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:19.833 19:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.833 19:38:01 -- common/autotest_common.sh@10 -- # set +x 00:07:19.833 19:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.833 19:38:01 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.833 19:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.833 19:38:01 -- common/autotest_common.sh@10 -- # set +x 00:07:19.833 [2024-04-24 19:38:01.256174] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.833 19:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.833 19:38:01 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:19.833 19:38:01 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:19.833 19:38:01 -- target/connect_disconnect.sh@34 -- # set +x 00:07:23.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:25.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.001 19:38:14 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:07:34.001 19:38:14 -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:07:34.001 19:38:14 -- nvmf/common.sh@477 -- # nvmfcleanup
00:07:34.001 19:38:14 -- nvmf/common.sh@117 -- # sync
00:07:34.001 19:38:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:34.001 19:38:14 -- nvmf/common.sh@120 -- # set +e
00:07:34.001 19:38:14 -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:34.001 19:38:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:34.001 rmmod nvme_tcp
00:07:34.001 rmmod nvme_fabrics
00:07:34.001 rmmod nvme_keyring
00:07:34.001 19:38:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:34.001 19:38:14 -- nvmf/common.sh@124 -- # set -e
00:07:34.001 19:38:14 -- nvmf/common.sh@125 -- # return 0
00:07:34.001 19:38:14 -- nvmf/common.sh@478 -- # '[' -n 1611671 ']'
00:07:34.001 19:38:14 -- nvmf/common.sh@479 -- # killprocess 1611671
00:07:34.001 19:38:14 -- common/autotest_common.sh@936 -- # '[' -z 1611671 ']'
00:07:34.001 19:38:14 -- common/autotest_common.sh@940 -- # kill -0 1611671
00:07:34.001 19:38:14 -- common/autotest_common.sh@941 -- # uname
00:07:34.001 19:38:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:34.001 19:38:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1611671
00:07:34.001 19:38:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:34.001 19:38:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:34.001 19:38:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1611671'
00:07:34.001 killing process with pid 1611671
00:07:34.001 19:38:14 -- common/autotest_common.sh@955 -- # kill 1611671
00:07:34.001 19:38:14 -- common/autotest_common.sh@960 -- # wait 1611671
00:07:34.462 19:38:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:07:34.462 19:38:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:07:34.462 19:38:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:07:34.462 19:38:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:34.462 19:38:15 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:34.462 19:38:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:34.462 19:38:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:34.462 19:38:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:35.905 19:38:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:35.905
00:07:35.905 real 0m19.265s
00:07:35.905 user 0m58.825s
00:07:35.905 sys 0m3.387s
00:07:35.905 19:38:17 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:35.905 19:38:17 -- common/autotest_common.sh@10 -- # set +x
00:07:35.905 ************************************
00:07:35.905 END TEST nvmf_connect_disconnect
00:07:35.905 ************************************
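Behind the five "disconnected 1 controller(s)" lines above, connect_disconnect builds one subsystem and then loops the host side. A minimal sketch under the same assumptions as the previous example (direct rpc.py calls, stock nvme-cli); the real test additionally waits for the namespace with serial SPDKISFASTANDAWESOME to show up before each disconnect:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed rpc.py location
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512                               # 64 MB bdev, 512-byte blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 5); do                                      # num_iterations=5 in the log
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # ... wait here for the /dev/nvme* namespace to appear ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1              # prints the NQN:... disconnected line
  done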
00:07:35.905 19:38:17 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:07:35.905 19:38:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:07:35.905 19:38:17 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:35.905 19:38:17 -- common/autotest_common.sh@10 -- # set +x
00:07:35.905 ************************************
00:07:35.905 START TEST nvmf_multitarget
00:07:35.905 ************************************
00:07:35.905 19:38:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:07:36.164 * Looking for test storage...
00:07:36.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:36.164 19:38:17 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:36.164 19:38:17 -- nvmf/common.sh@7 -- # uname -s
00:07:36.164 19:38:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:36.164 19:38:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:36.164 19:38:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:36.164 19:38:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:36.164 19:38:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:36.164 19:38:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:36.164 19:38:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:36.164 19:38:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:36.164 19:38:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:36.164 19:38:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:36.164 19:38:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:07:36.164 19:38:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:07:36.164 19:38:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:36.164 19:38:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:36.164 19:38:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:36.164 19:38:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:36.164 19:38:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:36.164 19:38:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:36.164 19:38:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:36.164 19:38:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:36.164 19:38:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:36.164 19:38:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:36.164 19:38:17 -- paths/export.sh@4 -- #
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.164 19:38:17 -- paths/export.sh@5 -- # export PATH 00:07:36.164 19:38:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.165 19:38:17 -- nvmf/common.sh@47 -- # : 0 00:07:36.165 19:38:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:36.165 19:38:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:36.165 19:38:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.165 19:38:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.165 19:38:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.165 19:38:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:36.165 19:38:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:36.165 19:38:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:36.165 19:38:17 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:36.165 19:38:17 -- target/multitarget.sh@15 -- # nvmftestinit 00:07:36.165 19:38:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:36.165 19:38:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.165 19:38:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:36.165 19:38:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:36.165 19:38:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:36.165 19:38:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.165 19:38:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.165 19:38:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.165 19:38:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:36.165 19:38:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:36.165 19:38:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:36.165 19:38:17 -- common/autotest_common.sh@10 -- # set +x 00:07:38.696 19:38:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:38.696 19:38:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:38.696 19:38:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:38.696 19:38:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:38.696 19:38:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:38.696 19:38:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:38.696 19:38:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:38.696 19:38:19 -- nvmf/common.sh@295 -- # net_devs=() 00:07:38.696 19:38:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:38.696 19:38:19 -- 
nvmf/common.sh@296 -- # e810=() 00:07:38.696 19:38:19 -- nvmf/common.sh@296 -- # local -ga e810 00:07:38.696 19:38:19 -- nvmf/common.sh@297 -- # x722=() 00:07:38.696 19:38:19 -- nvmf/common.sh@297 -- # local -ga x722 00:07:38.696 19:38:19 -- nvmf/common.sh@298 -- # mlx=() 00:07:38.696 19:38:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:38.696 19:38:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:38.696 19:38:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:38.696 19:38:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:38.696 19:38:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:38.696 19:38:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:38.696 19:38:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:38.696 19:38:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:38.696 19:38:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:38.696 19:38:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:38.696 19:38:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:38.696 19:38:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:38.696 19:38:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:38.696 19:38:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:38.696 19:38:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:38.696 19:38:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.696 19:38:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:38.696 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:38.696 19:38:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:38.696 19:38:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:38.696 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:38.696 19:38:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:38.696 19:38:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.696 19:38:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.696 19:38:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:38.696 19:38:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.696 19:38:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:07:38.696 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:38.696 19:38:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.696 19:38:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:38.696 19:38:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:38.696 19:38:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:38.696 19:38:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:38.696 19:38:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:38.696 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:38.696 19:38:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:38.696 19:38:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:38.696 19:38:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:38.696 19:38:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:38.696 19:38:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:38.696 19:38:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:38.696 19:38:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:38.696 19:38:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:38.696 19:38:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:38.696 19:38:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:38.696 19:38:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:38.696 19:38:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:38.696 19:38:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:38.696 19:38:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:38.696 19:38:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:38.696 19:38:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:38.696 19:38:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:38.696 19:38:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:38.696 19:38:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:38.696 19:38:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:38.696 19:38:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:38.696 19:38:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:38.696 19:38:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:38.696 19:38:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:38.696 19:38:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:38.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:38.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:07:38.696 00:07:38.696 --- 10.0.0.2 ping statistics --- 00:07:38.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.697 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:07:38.697 19:38:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:38.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:38.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:07:38.697 00:07:38.697 --- 10.0.0.1 ping statistics --- 00:07:38.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:38.697 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:07:38.697 19:38:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:38.697 19:38:19 -- nvmf/common.sh@411 -- # return 0 00:07:38.697 19:38:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:38.697 19:38:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:38.697 19:38:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:38.697 19:38:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:38.697 19:38:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:38.697 19:38:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:38.697 19:38:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:38.697 19:38:19 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:38.697 19:38:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:38.697 19:38:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:38.697 19:38:19 -- common/autotest_common.sh@10 -- # set +x 00:07:38.697 19:38:19 -- nvmf/common.sh@470 -- # nvmfpid=1615560 00:07:38.697 19:38:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:38.697 19:38:19 -- nvmf/common.sh@471 -- # waitforlisten 1615560 00:07:38.697 19:38:19 -- common/autotest_common.sh@817 -- # '[' -z 1615560 ']' 00:07:38.697 19:38:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.697 19:38:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:38.697 19:38:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.697 19:38:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:38.697 19:38:19 -- common/autotest_common.sh@10 -- # set +x 00:07:38.697 [2024-04-24 19:38:19.823077] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:07:38.697 [2024-04-24 19:38:19.823147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.697 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.697 [2024-04-24 19:38:19.894283] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.697 [2024-04-24 19:38:20.022269] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.697 [2024-04-24 19:38:20.022335] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.697 [2024-04-24 19:38:20.022352] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.697 [2024-04-24 19:38:20.022365] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:38.697 [2024-04-24 19:38:20.022377] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:38.697 [2024-04-24 19:38:20.022441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.697 [2024-04-24 19:38:20.022474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.697 [2024-04-24 19:38:20.022530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.697 [2024-04-24 19:38:20.022533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.697 19:38:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:38.697 19:38:20 -- common/autotest_common.sh@850 -- # return 0 00:07:38.697 19:38:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:38.697 19:38:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:38.697 19:38:20 -- common/autotest_common.sh@10 -- # set +x 00:07:38.697 19:38:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.697 19:38:20 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:38.697 19:38:20 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:38.697 19:38:20 -- target/multitarget.sh@21 -- # jq length 00:07:38.954 19:38:20 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:38.954 19:38:20 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:38.954 "nvmf_tgt_1" 00:07:38.954 19:38:20 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:39.211 "nvmf_tgt_2" 00:07:39.211 19:38:20 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:39.211 19:38:20 -- target/multitarget.sh@28 -- # jq length 00:07:39.211 19:38:20 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:39.211 19:38:20 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:39.469 true 00:07:39.469 19:38:20 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:39.469 true 00:07:39.469 19:38:20 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:39.469 19:38:20 -- target/multitarget.sh@35 -- # jq length 00:07:39.728 19:38:21 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:39.728 19:38:21 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:39.728 19:38:21 -- target/multitarget.sh@41 -- # nvmftestfini 00:07:39.728 19:38:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:39.728 19:38:21 -- nvmf/common.sh@117 -- # sync 00:07:39.728 19:38:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:39.728 19:38:21 -- nvmf/common.sh@120 -- # set +e 00:07:39.728 19:38:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:39.728 19:38:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:39.728 rmmod nvme_tcp 00:07:39.728 rmmod nvme_fabrics 00:07:39.728 rmmod nvme_keyring 00:07:39.728 19:38:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:39.728 19:38:21 -- nvmf/common.sh@124 -- # set -e 00:07:39.728 19:38:21 -- nvmf/common.sh@125 -- # return 0 
00:07:39.728 19:38:21 -- nvmf/common.sh@478 -- # '[' -n 1615560 ']' 00:07:39.728 19:38:21 -- nvmf/common.sh@479 -- # killprocess 1615560 00:07:39.728 19:38:21 -- common/autotest_common.sh@936 -- # '[' -z 1615560 ']' 00:07:39.728 19:38:21 -- common/autotest_common.sh@940 -- # kill -0 1615560 00:07:39.728 19:38:21 -- common/autotest_common.sh@941 -- # uname 00:07:39.728 19:38:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:39.728 19:38:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1615560 00:07:39.728 19:38:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:39.728 19:38:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:39.728 19:38:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1615560' 00:07:39.728 killing process with pid 1615560 00:07:39.728 19:38:21 -- common/autotest_common.sh@955 -- # kill 1615560 00:07:39.728 19:38:21 -- common/autotest_common.sh@960 -- # wait 1615560 00:07:39.988 19:38:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:39.988 19:38:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:39.988 19:38:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:39.988 19:38:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:39.988 19:38:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:39.988 19:38:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.988 19:38:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.988 19:38:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.521 19:38:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:42.521 00:07:42.521 real 0m6.021s 00:07:42.521 user 0m6.834s 00:07:42.521 sys 0m2.033s 00:07:42.521 19:38:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:42.521 19:38:23 -- common/autotest_common.sh@10 -- # set +x 00:07:42.521 ************************************ 00:07:42.521 END TEST nvmf_multitarget 00:07:42.521 ************************************ 00:07:42.521 19:38:23 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:42.521 19:38:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:42.521 19:38:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.521 19:38:23 -- common/autotest_common.sh@10 -- # set +x 00:07:42.521 ************************************ 00:07:42.521 START TEST nvmf_rpc 00:07:42.521 ************************************ 00:07:42.521 19:38:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:42.522 * Looking for test storage... 
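The teardown above goes through killprocess, whose visible behavior reduces to: confirm the pid is still alive, refuse to signal a sudo wrapper, then kill and reap. A sketch of that logic under those assumptions, not a verbatim copy of the helper:

  killprocess_sketch() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 1               # gone already?
      local name; name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
      [ "$name" = sudo ] && return 1                       # never kill through sudo
      kill "$pid" && wait "$pid" 2>/dev/null               # wait works: tgt is a child
  }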
00:07:42.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.522 19:38:23 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.522 19:38:23 -- nvmf/common.sh@7 -- # uname -s 00:07:42.522 19:38:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.522 19:38:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.522 19:38:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.522 19:38:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.522 19:38:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.522 19:38:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.522 19:38:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.522 19:38:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.522 19:38:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.522 19:38:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.522 19:38:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:42.522 19:38:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:42.522 19:38:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.522 19:38:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.522 19:38:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.522 19:38:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.522 19:38:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.522 19:38:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.522 19:38:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.522 19:38:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.522 19:38:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.522 19:38:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.522 19:38:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.522 19:38:23 -- paths/export.sh@5 -- # export PATH 00:07:42.522 19:38:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.522 19:38:23 -- nvmf/common.sh@47 -- # : 0 00:07:42.522 19:38:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:42.522 19:38:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:42.522 19:38:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.522 19:38:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.522 19:38:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.522 19:38:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:42.522 19:38:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:42.522 19:38:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:42.522 19:38:23 -- target/rpc.sh@11 -- # loops=5 00:07:42.522 19:38:23 -- target/rpc.sh@23 -- # nvmftestinit 00:07:42.522 19:38:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:42.522 19:38:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.522 19:38:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:42.522 19:38:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:42.522 19:38:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:42.522 19:38:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.522 19:38:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.522 19:38:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.522 19:38:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:42.522 19:38:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:42.522 19:38:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:42.522 19:38:23 -- common/autotest_common.sh@10 -- # set +x 00:07:44.421 19:38:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:44.421 19:38:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:44.421 19:38:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:44.421 19:38:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:44.421 19:38:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:44.421 19:38:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:44.421 19:38:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:44.421 19:38:25 -- nvmf/common.sh@295 -- # net_devs=() 00:07:44.421 19:38:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:44.421 19:38:25 -- nvmf/common.sh@296 -- # e810=() 00:07:44.421 19:38:25 -- nvmf/common.sh@296 -- # local -ga e810 00:07:44.421 
19:38:25 -- nvmf/common.sh@297 -- # x722=() 00:07:44.421 19:38:25 -- nvmf/common.sh@297 -- # local -ga x722 00:07:44.421 19:38:25 -- nvmf/common.sh@298 -- # mlx=() 00:07:44.421 19:38:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:44.421 19:38:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.421 19:38:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.421 19:38:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.421 19:38:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.421 19:38:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.421 19:38:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.421 19:38:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.421 19:38:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.421 19:38:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.421 19:38:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.421 19:38:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.421 19:38:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:44.421 19:38:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:44.421 19:38:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:44.421 19:38:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:44.421 19:38:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:44.421 19:38:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:44.421 19:38:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.421 19:38:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:44.421 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:44.421 19:38:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.421 19:38:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.421 19:38:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.422 19:38:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.422 19:38:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.422 19:38:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.422 19:38:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:44.422 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:44.422 19:38:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.422 19:38:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.422 19:38:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.422 19:38:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.422 19:38:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.422 19:38:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:44.422 19:38:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:44.422 19:38:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:44.422 19:38:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.422 19:38:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.422 19:38:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:44.422 19:38:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.422 19:38:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:44.422 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:44.422 19:38:25 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:44.422 19:38:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.422 19:38:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.422 19:38:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:44.422 19:38:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.422 19:38:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:44.422 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:44.422 19:38:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.422 19:38:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:44.422 19:38:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:44.422 19:38:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:44.422 19:38:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:44.422 19:38:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:44.422 19:38:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.422 19:38:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.422 19:38:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.422 19:38:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:44.422 19:38:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.422 19:38:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.422 19:38:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:44.422 19:38:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.422 19:38:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.422 19:38:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:44.422 19:38:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:44.422 19:38:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.422 19:38:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.422 19:38:25 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.422 19:38:25 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.422 19:38:25 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:44.422 19:38:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.422 19:38:25 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.422 19:38:25 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.422 19:38:25 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:44.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:07:44.422 00:07:44.422 --- 10.0.0.2 ping statistics --- 00:07:44.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.422 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:07:44.422 19:38:25 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
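The two-port rig assembled above, gathered in one place: one port of the E810 pair becomes the target side inside a private namespace, its sibling stays in the root namespace as the initiator side, and TCP port 4420 is opened for NVMe/TCP. These are the commands from the xtrace, in order:

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add $NS
  ip link set cvl_0_0 netns $NS
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                    # initiator ns -> target ns
  ip netns exec $NS ping -c 1 10.0.0.1  # target ns -> initiator ns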
00:07:44.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:07:44.422 00:07:44.422 --- 10.0.0.1 ping statistics --- 00:07:44.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.422 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:07:44.422 19:38:25 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.422 19:38:25 -- nvmf/common.sh@411 -- # return 0 00:07:44.422 19:38:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:44.422 19:38:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.422 19:38:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:44.422 19:38:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:44.422 19:38:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.422 19:38:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:44.422 19:38:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:44.422 19:38:25 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:44.422 19:38:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:44.422 19:38:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:44.422 19:38:25 -- common/autotest_common.sh@10 -- # set +x 00:07:44.422 19:38:25 -- nvmf/common.sh@470 -- # nvmfpid=1617667 00:07:44.422 19:38:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:44.422 19:38:25 -- nvmf/common.sh@471 -- # waitforlisten 1617667 00:07:44.422 19:38:25 -- common/autotest_common.sh@817 -- # '[' -z 1617667 ']' 00:07:44.422 19:38:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.422 19:38:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:44.422 19:38:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.422 19:38:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:44.422 19:38:25 -- common/autotest_common.sh@10 -- # set +x 00:07:44.422 [2024-04-24 19:38:25.914922] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:07:44.422 [2024-04-24 19:38:25.915006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.679 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.680 [2024-04-24 19:38:25.977777] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.680 [2024-04-24 19:38:26.089335] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.680 [2024-04-24 19:38:26.089397] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.680 [2024-04-24 19:38:26.089413] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.680 [2024-04-24 19:38:26.089427] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.680 [2024-04-24 19:38:26.089438] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
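The app_setup_trace NOTICE lines above name two ways to pull the trace data that the -e 0xFFFF tracepoint mask is recording. Both commands are quoted from the log itself; the only assumption is that spdk_trace is reachable from the build (it ships under build/bin in this tree):

  spdk_trace -s nvmf -i 0 > nvmf_trace.out   # live snapshot while the app runs
  cp /dev/shm/nvmf_trace.0 .                 # keep the shm file for offline analysis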
00:07:44.680 [2024-04-24 19:38:26.089520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.680 [2024-04-24 19:38:26.089574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.680 [2024-04-24 19:38:26.089885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.680 [2024-04-24 19:38:26.089890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.613 19:38:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:45.613 19:38:26 -- common/autotest_common.sh@850 -- # return 0 00:07:45.613 19:38:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:45.613 19:38:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:45.613 19:38:26 -- common/autotest_common.sh@10 -- # set +x 00:07:45.613 19:38:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.613 19:38:26 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:45.613 19:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.613 19:38:26 -- common/autotest_common.sh@10 -- # set +x 00:07:45.613 19:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.613 19:38:26 -- target/rpc.sh@26 -- # stats='{ 00:07:45.613 "tick_rate": 2700000000, 00:07:45.613 "poll_groups": [ 00:07:45.613 { 00:07:45.613 "name": "nvmf_tgt_poll_group_0", 00:07:45.613 "admin_qpairs": 0, 00:07:45.613 "io_qpairs": 0, 00:07:45.613 "current_admin_qpairs": 0, 00:07:45.613 "current_io_qpairs": 0, 00:07:45.613 "pending_bdev_io": 0, 00:07:45.613 "completed_nvme_io": 0, 00:07:45.613 "transports": [] 00:07:45.613 }, 00:07:45.613 { 00:07:45.613 "name": "nvmf_tgt_poll_group_1", 00:07:45.613 "admin_qpairs": 0, 00:07:45.613 "io_qpairs": 0, 00:07:45.613 "current_admin_qpairs": 0, 00:07:45.613 "current_io_qpairs": 0, 00:07:45.613 "pending_bdev_io": 0, 00:07:45.613 "completed_nvme_io": 0, 00:07:45.613 "transports": [] 00:07:45.613 }, 00:07:45.613 { 00:07:45.613 "name": "nvmf_tgt_poll_group_2", 00:07:45.613 "admin_qpairs": 0, 00:07:45.613 "io_qpairs": 0, 00:07:45.613 "current_admin_qpairs": 0, 00:07:45.613 "current_io_qpairs": 0, 00:07:45.613 "pending_bdev_io": 0, 00:07:45.613 "completed_nvme_io": 0, 00:07:45.613 "transports": [] 00:07:45.613 }, 00:07:45.613 { 00:07:45.613 "name": "nvmf_tgt_poll_group_3", 00:07:45.613 "admin_qpairs": 0, 00:07:45.613 "io_qpairs": 0, 00:07:45.613 "current_admin_qpairs": 0, 00:07:45.613 "current_io_qpairs": 0, 00:07:45.613 "pending_bdev_io": 0, 00:07:45.613 "completed_nvme_io": 0, 00:07:45.613 "transports": [] 00:07:45.613 } 00:07:45.613 ] 00:07:45.613 }' 00:07:45.613 19:38:26 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:45.613 19:38:26 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:45.613 19:38:26 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:45.613 19:38:26 -- target/rpc.sh@15 -- # wc -l 00:07:45.613 19:38:26 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:45.613 19:38:26 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:45.613 19:38:26 -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:45.613 19:38:26 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:45.613 19:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.613 19:38:26 -- common/autotest_common.sh@10 -- # set +x 00:07:45.613 [2024-04-24 19:38:26.965986] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.613 19:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.613 19:38:26 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:45.613 19:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.613 19:38:26 -- common/autotest_common.sh@10 -- # set +x 00:07:45.613 19:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.613 19:38:26 -- target/rpc.sh@33 -- # stats='{ 00:07:45.613 "tick_rate": 2700000000, 00:07:45.613 "poll_groups": [ 00:07:45.613 { 00:07:45.613 "name": "nvmf_tgt_poll_group_0", 00:07:45.613 "admin_qpairs": 0, 00:07:45.613 "io_qpairs": 0, 00:07:45.613 "current_admin_qpairs": 0, 00:07:45.613 "current_io_qpairs": 0, 00:07:45.613 "pending_bdev_io": 0, 00:07:45.613 "completed_nvme_io": 0, 00:07:45.613 "transports": [ 00:07:45.613 { 00:07:45.613 "trtype": "TCP" 00:07:45.613 } 00:07:45.613 ] 00:07:45.613 }, 00:07:45.613 { 00:07:45.613 "name": "nvmf_tgt_poll_group_1", 00:07:45.613 "admin_qpairs": 0, 00:07:45.613 "io_qpairs": 0, 00:07:45.613 "current_admin_qpairs": 0, 00:07:45.613 "current_io_qpairs": 0, 00:07:45.613 "pending_bdev_io": 0, 00:07:45.614 "completed_nvme_io": 0, 00:07:45.614 "transports": [ 00:07:45.614 { 00:07:45.614 "trtype": "TCP" 00:07:45.614 } 00:07:45.614 ] 00:07:45.614 }, 00:07:45.614 { 00:07:45.614 "name": "nvmf_tgt_poll_group_2", 00:07:45.614 "admin_qpairs": 0, 00:07:45.614 "io_qpairs": 0, 00:07:45.614 "current_admin_qpairs": 0, 00:07:45.614 "current_io_qpairs": 0, 00:07:45.614 "pending_bdev_io": 0, 00:07:45.614 "completed_nvme_io": 0, 00:07:45.614 "transports": [ 00:07:45.614 { 00:07:45.614 "trtype": "TCP" 00:07:45.614 } 00:07:45.614 ] 00:07:45.614 }, 00:07:45.614 { 00:07:45.614 "name": "nvmf_tgt_poll_group_3", 00:07:45.614 "admin_qpairs": 0, 00:07:45.614 "io_qpairs": 0, 00:07:45.614 "current_admin_qpairs": 0, 00:07:45.614 "current_io_qpairs": 0, 00:07:45.614 "pending_bdev_io": 0, 00:07:45.614 "completed_nvme_io": 0, 00:07:45.614 "transports": [ 00:07:45.614 { 00:07:45.614 "trtype": "TCP" 00:07:45.614 } 00:07:45.614 ] 00:07:45.614 } 00:07:45.614 ] 00:07:45.614 }' 00:07:45.614 19:38:26 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:45.614 19:38:26 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:45.614 19:38:26 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:45.614 19:38:26 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:45.614 19:38:27 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:45.614 19:38:27 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:45.614 19:38:27 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:45.614 19:38:27 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:45.614 19:38:27 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:45.614 19:38:27 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:45.614 19:38:27 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:45.614 19:38:27 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:45.614 19:38:27 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:45.614 19:38:27 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:45.614 19:38:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.614 19:38:27 -- common/autotest_common.sh@10 -- # set +x 00:07:45.614 Malloc1 00:07:45.614 19:38:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.614 19:38:27 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:45.614 19:38:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.614 19:38:27 -- common/autotest_common.sh@10 -- # set +x 00:07:45.614 
19:38:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.614 19:38:27 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.614 19:38:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.614 19:38:27 -- common/autotest_common.sh@10 -- # set +x 00:07:45.614 19:38:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.614 19:38:27 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:45.614 19:38:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.614 19:38:27 -- common/autotest_common.sh@10 -- # set +x 00:07:45.614 19:38:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.614 19:38:27 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.614 19:38:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.614 19:38:27 -- common/autotest_common.sh@10 -- # set +x 00:07:45.614 [2024-04-24 19:38:27.121354] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.614 19:38:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.614 19:38:27 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:45.614 19:38:27 -- common/autotest_common.sh@638 -- # local es=0 00:07:45.614 19:38:27 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:45.614 19:38:27 -- common/autotest_common.sh@626 -- # local arg=nvme 00:07:45.614 19:38:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:45.614 19:38:27 -- common/autotest_common.sh@630 -- # type -t nvme 00:07:45.872 19:38:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:45.872 19:38:27 -- common/autotest_common.sh@632 -- # type -P nvme 00:07:45.872 19:38:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:45.872 19:38:27 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:07:45.872 19:38:27 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:07:45.872 19:38:27 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:07:45.872 [2024-04-24 19:38:27.143842] ctrlr.c: 778:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:45.872 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:45.872 could not add new controller: failed to write to nvme-fabrics device 00:07:45.872 19:38:27 -- common/autotest_common.sh@641 -- # es=1 00:07:45.872 19:38:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:45.872 19:38:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:45.872 19:38:27 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:07:45.872 19:38:27 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.872 19:38:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.872 19:38:27 -- common/autotest_common.sh@10 -- # set +x 00:07:45.872 19:38:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.872 19:38:27 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:46.467 19:38:27 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:46.467 19:38:27 -- common/autotest_common.sh@1184 -- # local i=0 00:07:46.467 19:38:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:46.467 19:38:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:46.467 19:38:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:48.364 19:38:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:48.364 19:38:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:48.364 19:38:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:48.364 19:38:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:48.364 19:38:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:48.364 19:38:29 -- common/autotest_common.sh@1194 -- # return 0 00:07:48.364 19:38:29 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.364 19:38:29 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.364 19:38:29 -- common/autotest_common.sh@1205 -- # local i=0 00:07:48.364 19:38:29 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:48.364 19:38:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.364 19:38:29 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:48.364 19:38:29 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.364 19:38:29 -- common/autotest_common.sh@1217 -- # return 0 00:07:48.364 19:38:29 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:48.364 19:38:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.364 19:38:29 -- common/autotest_common.sh@10 -- # set +x 00:07:48.364 19:38:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.364 19:38:29 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.364 19:38:29 -- common/autotest_common.sh@638 -- # local es=0 00:07:48.364 19:38:29 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.364 19:38:29 -- common/autotest_common.sh@626 -- # local arg=nvme 00:07:48.364 19:38:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:48.364 19:38:29 -- common/autotest_common.sh@630 -- # type -t nvme 00:07:48.364 19:38:29 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:48.364 19:38:29 -- common/autotest_common.sh@632 -- # type -P nvme 00:07:48.364 19:38:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:48.364 19:38:29 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:07:48.364 19:38:29 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:07:48.364 19:38:29 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.622 [2024-04-24 19:38:29.886271] ctrlr.c: 778:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:48.622 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:48.622 could not add new controller: failed to write to nvme-fabrics device 00:07:48.622 19:38:29 -- common/autotest_common.sh@641 -- # es=1 00:07:48.622 19:38:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:48.622 19:38:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:48.622 19:38:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:48.622 19:38:29 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:48.622 19:38:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.622 19:38:29 -- common/autotest_common.sh@10 -- # set +x 00:07:48.622 19:38:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.622 19:38:29 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:49.187 19:38:30 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:49.187 19:38:30 -- common/autotest_common.sh@1184 -- # local i=0 00:07:49.187 19:38:30 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:49.187 19:38:30 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:49.187 19:38:30 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:51.085 19:38:32 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:51.085 19:38:32 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:51.085 19:38:32 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:51.085 19:38:32 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:51.085 19:38:32 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:51.085 19:38:32 -- common/autotest_common.sh@1194 -- # return 0 00:07:51.085 19:38:32 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:51.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.344 19:38:32 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:51.344 19:38:32 -- common/autotest_common.sh@1205 -- # local i=0 00:07:51.344 19:38:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:51.344 19:38:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.344 19:38:32 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:51.344 19:38:32 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.344 19:38:32 -- common/autotest_common.sh@1217 -- # return 0 00:07:51.344 19:38:32 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.344 19:38:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.344 19:38:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.344 19:38:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.344 19:38:32 -- target/rpc.sh@81 -- # seq 1 5 00:07:51.344 19:38:32 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:51.344 19:38:32 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:51.344 19:38:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.344 19:38:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.344 19:38:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.344 19:38:32 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.344 19:38:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.344 19:38:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.344 [2024-04-24 19:38:32.684085] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.344 19:38:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.344 19:38:32 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:51.344 19:38:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.344 19:38:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.344 19:38:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.344 19:38:32 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:51.344 19:38:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:51.344 19:38:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.344 19:38:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:51.344 19:38:32 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:51.910 19:38:33 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:51.910 19:38:33 -- common/autotest_common.sh@1184 -- # local i=0 00:07:51.910 19:38:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:51.910 19:38:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:51.910 19:38:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:54.437 19:38:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:54.437 19:38:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:54.437 19:38:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:54.437 19:38:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:54.437 19:38:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:54.437 19:38:35 -- common/autotest_common.sh@1194 -- # return 0 00:07:54.437 19:38:35 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:54.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.437 19:38:35 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:54.437 19:38:35 -- common/autotest_common.sh@1205 -- # local i=0 00:07:54.437 19:38:35 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:54.437 19:38:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
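One pass of the loop being driven above, including the host-ACL check that preceded it: a connect without a registered hostnqn is rejected with "does not allow host", so each iteration either adds the host or flips allow_any_host before the kernel initiator attaches. rpc_cmd and waitforserial are the harness helpers visible in the xtrace:

  nqn=nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns $nqn Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host $nqn      # or nvmf_subsystem_add_host per-NQN
  nvme connect -t tcp -n $nqn -a 10.0.0.2 -s 4420 \
      --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID
  waitforserial SPDKISFASTANDAWESOME              # namespace shows up in lsblk
  nvme disconnect -n $nqn
  rpc_cmd nvmf_subsystem_remove_ns $nqn 5
  rpc_cmd nvmf_delete_subsystem $nqn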
00:07:54.437 19:38:35 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:54.437 19:38:35 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:54.437 19:38:35 -- common/autotest_common.sh@1217 -- # return 0 00:07:54.437 19:38:35 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.437 19:38:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.437 19:38:35 -- common/autotest_common.sh@10 -- # set +x 00:07:54.437 19:38:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.437 19:38:35 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:54.437 19:38:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.437 19:38:35 -- common/autotest_common.sh@10 -- # set +x 00:07:54.437 19:38:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.437 19:38:35 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:54.437 19:38:35 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:54.437 19:38:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.437 19:38:35 -- common/autotest_common.sh@10 -- # set +x 00:07:54.437 19:38:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.437 19:38:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:54.437 19:38:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.437 19:38:35 -- common/autotest_common.sh@10 -- # set +x 00:07:54.437 [2024-04-24 19:38:35.509249] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.437 19:38:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.437 19:38:35 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:54.437 19:38:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.437 19:38:35 -- common/autotest_common.sh@10 -- # set +x 00:07:54.437 19:38:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.437 19:38:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:54.437 19:38:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.437 19:38:35 -- common/autotest_common.sh@10 -- # set +x 00:07:54.437 19:38:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.437 19:38:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:54.694 19:38:36 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:54.694 19:38:36 -- common/autotest_common.sh@1184 -- # local i=0 00:07:54.694 19:38:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:54.694 19:38:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:54.694 19:38:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:57.221 19:38:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:57.221 19:38:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:57.221 19:38:38 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.221 19:38:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:57.221 19:38:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.221 19:38:38 -- 
common/autotest_common.sh@1194 -- # return 0 00:07:57.221 19:38:38 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:57.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.221 19:38:38 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:57.221 19:38:38 -- common/autotest_common.sh@1205 -- # local i=0 00:07:57.221 19:38:38 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:57.221 19:38:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.221 19:38:38 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:57.221 19:38:38 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:57.221 19:38:38 -- common/autotest_common.sh@1217 -- # return 0 00:07:57.221 19:38:38 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:57.221 19:38:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.221 19:38:38 -- common/autotest_common.sh@10 -- # set +x 00:07:57.221 19:38:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.221 19:38:38 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:57.221 19:38:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.221 19:38:38 -- common/autotest_common.sh@10 -- # set +x 00:07:57.221 19:38:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.221 19:38:38 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:57.221 19:38:38 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:57.221 19:38:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.221 19:38:38 -- common/autotest_common.sh@10 -- # set +x 00:07:57.221 19:38:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.221 19:38:38 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.221 19:38:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.221 19:38:38 -- common/autotest_common.sh@10 -- # set +x 00:07:57.221 [2024-04-24 19:38:38.232762] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.221 19:38:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.221 19:38:38 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:57.221 19:38:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.221 19:38:38 -- common/autotest_common.sh@10 -- # set +x 00:07:57.221 19:38:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.221 19:38:38 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:57.221 19:38:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.221 19:38:38 -- common/autotest_common.sh@10 -- # set +x 00:07:57.221 19:38:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.221 19:38:38 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:57.477 19:38:38 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:57.477 19:38:38 -- common/autotest_common.sh@1184 -- # local i=0 00:07:57.477 19:38:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:57.478 19:38:38 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:07:57.478 19:38:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:00.002 19:38:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:00.002 19:38:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:00.002 19:38:40 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:00.002 19:38:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:00.002 19:38:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:00.002 19:38:40 -- common/autotest_common.sh@1194 -- # return 0 00:08:00.002 19:38:40 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:00.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.002 19:38:41 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:00.002 19:38:41 -- common/autotest_common.sh@1205 -- # local i=0 00:08:00.002 19:38:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:00.002 19:38:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.002 19:38:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:00.002 19:38:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:00.002 19:38:41 -- common/autotest_common.sh@1217 -- # return 0 00:08:00.002 19:38:41 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:00.002 19:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.002 19:38:41 -- common/autotest_common.sh@10 -- # set +x 00:08:00.002 19:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.002 19:38:41 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:00.002 19:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.002 19:38:41 -- common/autotest_common.sh@10 -- # set +x 00:08:00.002 19:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.002 19:38:41 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:00.002 19:38:41 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:00.002 19:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.002 19:38:41 -- common/autotest_common.sh@10 -- # set +x 00:08:00.002 19:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.002 19:38:41 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.002 19:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.002 19:38:41 -- common/autotest_common.sh@10 -- # set +x 00:08:00.002 [2024-04-24 19:38:41.057188] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.002 19:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.002 19:38:41 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:00.002 19:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.002 19:38:41 -- common/autotest_common.sh@10 -- # set +x 00:08:00.002 19:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.002 19:38:41 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:00.002 19:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:00.002 19:38:41 -- common/autotest_common.sh@10 -- # set +x 00:08:00.002 19:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:00.002 
19:38:41 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:00.259 19:38:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:00.259 19:38:41 -- common/autotest_common.sh@1184 -- # local i=0 00:08:00.259 19:38:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:00.259 19:38:41 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:00.259 19:38:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:02.785 19:38:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:02.785 19:38:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:02.785 19:38:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:02.785 19:38:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:02.785 19:38:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:02.785 19:38:43 -- common/autotest_common.sh@1194 -- # return 0 00:08:02.785 19:38:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.785 19:38:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.785 19:38:43 -- common/autotest_common.sh@1205 -- # local i=0 00:08:02.785 19:38:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:02.785 19:38:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.785 19:38:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:02.785 19:38:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.785 19:38:43 -- common/autotest_common.sh@1217 -- # return 0 00:08:02.785 19:38:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:02.785 19:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.785 19:38:43 -- common/autotest_common.sh@10 -- # set +x 00:08:02.785 19:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.785 19:38:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.785 19:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.785 19:38:43 -- common/autotest_common.sh@10 -- # set +x 00:08:02.785 19:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.785 19:38:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:02.785 19:38:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:02.785 19:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.785 19:38:43 -- common/autotest_common.sh@10 -- # set +x 00:08:02.785 19:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.785 19:38:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.785 19:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.785 19:38:43 -- common/autotest_common.sh@10 -- # set +x 00:08:02.785 [2024-04-24 19:38:43.852879] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.785 19:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.785 19:38:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:02.785 
19:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.785 19:38:43 -- common/autotest_common.sh@10 -- # set +x 00:08:02.785 19:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.785 19:38:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:02.785 19:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:02.785 19:38:43 -- common/autotest_common.sh@10 -- # set +x 00:08:02.785 19:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:02.785 19:38:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:03.043 19:38:44 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:03.043 19:38:44 -- common/autotest_common.sh@1184 -- # local i=0 00:08:03.043 19:38:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:03.043 19:38:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:03.043 19:38:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:05.600 19:38:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:05.600 19:38:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:05.600 19:38:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:05.600 19:38:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:05.600 19:38:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:05.600 19:38:46 -- common/autotest_common.sh@1194 -- # return 0 00:08:05.600 19:38:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:05.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.600 19:38:46 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:05.600 19:38:46 -- common/autotest_common.sh@1205 -- # local i=0 00:08:05.600 19:38:46 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:05.600 19:38:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.600 19:38:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:05.600 19:38:46 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.600 19:38:46 -- common/autotest_common.sh@1217 -- # return 0 00:08:05.600 19:38:46 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.600 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.600 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.600 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.600 19:38:46 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.600 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.600 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.600 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.600 19:38:46 -- target/rpc.sh@99 -- # seq 1 5 00:08:05.600 19:38:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:05.600 19:38:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:05.600 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.600 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.600 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.600 19:38:46 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.600 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.600 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.600 [2024-04-24 19:38:46.675406] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.600 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.600 19:38:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:05.600 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.600 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.600 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.600 19:38:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:05.600 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.600 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.600 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.600 19:38:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.600 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.600 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:05.601 19:38:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 [2024-04-24 19:38:46.723476] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:05.601 19:38:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 [2024-04-24 19:38:46.771662] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:05.601 19:38:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 [2024-04-24 19:38:46.819821] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 
19:38:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:05.601 19:38:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 [2024-04-24 19:38:46.867993] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x 00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:05.601 19:38:46 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
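The five numbered passes above all drive the same create/tear-down cycle; condensed, the loop traced at target/rpc.sh@99-107 is equivalent to the following sketch (reconstructed from the xtrace, not quoted from rpc.sh; the rpc_cmd stand-in is an assumption added for self-containment):

  # Sketch of the subsystem churn loop seen above. rpc_cmd here is a minimal
  # stand-in for the autotest helper, which forwards to the target's RPC socket.
  rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }
  loops=5
  for i in $(seq 1 $loops); do
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

Each pass leaves the target with no subsystems, so the nvmf_get_stats dump that follows reports accumulated per-poll-group counters rather than live connections (note its current_admin_qpairs and current_io_qpairs are all 0).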
00:08:05.601 19:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:08:05.601 19:38:46 -- common/autotest_common.sh@10 -- # set +x
00:08:05.601 19:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:08:05.601 19:38:46 -- target/rpc.sh@110 -- # stats='{
00:08:05.601 "tick_rate": 2700000000,
00:08:05.601 "poll_groups": [
00:08:05.601 {
00:08:05.601 "name": "nvmf_tgt_poll_group_0",
00:08:05.601 "admin_qpairs": 2,
00:08:05.601 "io_qpairs": 84,
00:08:05.601 "current_admin_qpairs": 0,
00:08:05.601 "current_io_qpairs": 0,
00:08:05.601 "pending_bdev_io": 0,
00:08:05.601 "completed_nvme_io": 135,
00:08:05.601 "transports": [
00:08:05.601 {
00:08:05.601 "trtype": "TCP"
00:08:05.601 }
00:08:05.601 ]
00:08:05.601 },
00:08:05.601 {
00:08:05.601 "name": "nvmf_tgt_poll_group_1",
00:08:05.601 "admin_qpairs": 2,
00:08:05.601 "io_qpairs": 84,
00:08:05.601 "current_admin_qpairs": 0,
00:08:05.601 "current_io_qpairs": 0,
00:08:05.601 "pending_bdev_io": 0,
00:08:05.601 "completed_nvme_io": 135,
00:08:05.601 "transports": [
00:08:05.601 {
00:08:05.601 "trtype": "TCP"
00:08:05.601 }
00:08:05.601 ]
00:08:05.601 },
00:08:05.601 {
00:08:05.601 "name": "nvmf_tgt_poll_group_2",
00:08:05.601 "admin_qpairs": 1,
00:08:05.601 "io_qpairs": 84,
00:08:05.601 "current_admin_qpairs": 0,
00:08:05.601 "current_io_qpairs": 0,
00:08:05.601 "pending_bdev_io": 0,
00:08:05.601 "completed_nvme_io": 183,
00:08:05.601 "transports": [
00:08:05.601 {
00:08:05.601 "trtype": "TCP"
00:08:05.601 }
00:08:05.601 ]
00:08:05.601 },
00:08:05.601 {
00:08:05.601 "name": "nvmf_tgt_poll_group_3",
00:08:05.601 "admin_qpairs": 2,
00:08:05.601 "io_qpairs": 84,
00:08:05.601 "current_admin_qpairs": 0,
00:08:05.601 "current_io_qpairs": 0,
00:08:05.601 "pending_bdev_io": 0,
00:08:05.601 "completed_nvme_io": 233,
00:08:05.601 "transports": [
00:08:05.601 {
00:08:05.601 "trtype": "TCP"
00:08:05.601 }
00:08:05.601 ]
00:08:05.601 }
00:08:05.601 ]
00:08:05.601 }'
00:08:05.601 19:38:46 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:08:05.601 19:38:46 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:08:05.601 19:38:46 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:08:05.602 19:38:46 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:08:05.602 19:38:46 -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:08:05.602 19:38:46 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:08:05.602 19:38:46 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:08:05.602 19:38:46 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:08:05.602 19:38:46 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:08:05.602 19:38:46 -- target/rpc.sh@113 -- # (( 336 > 0 ))
00:08:05.602 19:38:46 -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:08:05.602 19:38:46 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:08:05.602 19:38:46 -- target/rpc.sh@123 -- # nvmftestfini
00:08:05.602 19:38:47 -- nvmf/common.sh@477 -- # nvmfcleanup
00:08:05.602 19:38:47 -- nvmf/common.sh@117 -- # sync
00:08:05.602 19:38:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:05.602 19:38:47 -- nvmf/common.sh@120 -- # set +e
00:08:05.602 19:38:47 -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:05.602 19:38:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:05.602 rmmod nvme_tcp
00:08:05.602 rmmod nvme_fabrics
00:08:05.602 rmmod nvme_keyring
00:08:05.602 19:38:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:05.602 19:38:47 -- nvmf/common.sh@124 -- # set -e
00:08:05.602 19:38:47 -- 
nvmf/common.sh@125 -- # return 0 00:08:05.602 19:38:47 -- nvmf/common.sh@478 -- # '[' -n 1617667 ']' 00:08:05.602 19:38:47 -- nvmf/common.sh@479 -- # killprocess 1617667 00:08:05.602 19:38:47 -- common/autotest_common.sh@936 -- # '[' -z 1617667 ']' 00:08:05.602 19:38:47 -- common/autotest_common.sh@940 -- # kill -0 1617667 00:08:05.602 19:38:47 -- common/autotest_common.sh@941 -- # uname 00:08:05.602 19:38:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:05.602 19:38:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1617667 00:08:05.602 19:38:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:05.602 19:38:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:05.602 19:38:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1617667' 00:08:05.602 killing process with pid 1617667 00:08:05.602 19:38:47 -- common/autotest_common.sh@955 -- # kill 1617667 00:08:05.602 19:38:47 -- common/autotest_common.sh@960 -- # wait 1617667 00:08:06.170 19:38:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:06.170 19:38:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:06.170 19:38:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:06.170 19:38:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:06.170 19:38:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:06.170 19:38:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.170 19:38:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:06.170 19:38:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.073 19:38:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:08.073 00:08:08.073 real 0m25.912s 00:08:08.073 user 1m24.427s 00:08:08.073 sys 0m4.211s 00:08:08.073 19:38:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.073 19:38:49 -- common/autotest_common.sh@10 -- # set +x 00:08:08.073 ************************************ 00:08:08.073 END TEST nvmf_rpc 00:08:08.073 ************************************ 00:08:08.073 19:38:49 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:08.073 19:38:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:08.073 19:38:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.073 19:38:49 -- common/autotest_common.sh@10 -- # set +x 00:08:08.073 ************************************ 00:08:08.073 START TEST nvmf_invalid 00:08:08.073 ************************************ 00:08:08.073 19:38:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:08.332 * Looking for test storage... 
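The two arithmetic guards traced just before the teardown, (( 7 > 0 )) and (( 336 > 0 )), come from the jsum helper at target/rpc.sh@19-20: it applies a jq filter to the captured nvmf_get_stats JSON and folds the per-poll-group numbers into one total with awk. A minimal sketch of that idiom, assuming the JSON is held in $stats as in the trace:

  # jsum: sum the numbers selected by a jq filter from the captured stats JSON.
  jsum() {
      local filter=$1
      jq "$filter" <<<"$stats" | awk '{s+=$1} END {print s}'
  }
  # As asserted above: 2+2+1+2 admin qpairs and 4*84 I/O qpairs across the
  # four poll groups.
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))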
00:08:08.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.332 19:38:49 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.332 19:38:49 -- nvmf/common.sh@7 -- # uname -s 00:08:08.332 19:38:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.332 19:38:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.332 19:38:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.332 19:38:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.332 19:38:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.332 19:38:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.332 19:38:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.332 19:38:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.332 19:38:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.332 19:38:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.332 19:38:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:08.332 19:38:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:08.332 19:38:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.332 19:38:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.332 19:38:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.332 19:38:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.332 19:38:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.332 19:38:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.332 19:38:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.332 19:38:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.332 19:38:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.332 19:38:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.332 19:38:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.332 19:38:49 -- paths/export.sh@5 -- # export PATH 00:08:08.332 19:38:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.332 19:38:49 -- nvmf/common.sh@47 -- # : 0 00:08:08.332 19:38:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.332 19:38:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.332 19:38:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.332 19:38:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.332 19:38:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.332 19:38:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.332 19:38:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.332 19:38:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.332 19:38:49 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:08.332 19:38:49 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:08.332 19:38:49 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:08.332 19:38:49 -- target/invalid.sh@14 -- # target=foobar 00:08:08.332 19:38:49 -- target/invalid.sh@16 -- # RANDOM=0 00:08:08.333 19:38:49 -- target/invalid.sh@34 -- # nvmftestinit 00:08:08.333 19:38:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:08.333 19:38:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.333 19:38:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:08.333 19:38:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:08.333 19:38:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:08.333 19:38:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.333 19:38:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.333 19:38:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.333 19:38:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:08.333 19:38:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:08.333 19:38:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:08.333 19:38:49 -- common/autotest_common.sh@10 -- # set +x 00:08:10.235 19:38:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:10.235 19:38:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.235 19:38:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.235 19:38:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.235 19:38:51 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.235 19:38:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.235 19:38:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.235 19:38:51 -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.235 19:38:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.235 19:38:51 -- nvmf/common.sh@296 -- # e810=() 00:08:10.235 19:38:51 -- nvmf/common.sh@296 -- # local -ga e810 00:08:10.235 19:38:51 -- nvmf/common.sh@297 -- # x722=() 00:08:10.235 19:38:51 -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.235 19:38:51 -- nvmf/common.sh@298 -- # mlx=() 00:08:10.235 19:38:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:10.235 19:38:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.235 19:38:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.235 19:38:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.235 19:38:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.235 19:38:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.235 19:38:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.235 19:38:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.235 19:38:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.235 19:38:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.235 19:38:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.235 19:38:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.235 19:38:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.235 19:38:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:10.235 19:38:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:10.235 19:38:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:10.235 19:38:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:10.235 19:38:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.236 19:38:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.236 19:38:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:10.236 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:10.236 19:38:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.236 19:38:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.236 19:38:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.236 19:38:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.236 19:38:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.236 19:38:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.236 19:38:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:10.236 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:10.236 19:38:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.236 19:38:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.236 19:38:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.236 19:38:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.236 19:38:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.236 19:38:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.236 19:38:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:10.236 19:38:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:10.236 19:38:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.236 
19:38:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.236 19:38:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:10.236 19:38:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.236 19:38:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:10.236 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:10.236 19:38:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.236 19:38:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.236 19:38:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.236 19:38:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:10.236 19:38:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.236 19:38:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:10.236 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:10.236 19:38:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.236 19:38:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:10.236 19:38:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:10.236 19:38:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:10.236 19:38:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:10.236 19:38:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:10.236 19:38:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.236 19:38:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.236 19:38:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.236 19:38:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:10.236 19:38:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.236 19:38:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.236 19:38:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:10.236 19:38:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.236 19:38:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.236 19:38:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:10.236 19:38:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:10.236 19:38:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.236 19:38:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.236 19:38:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.236 19:38:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.236 19:38:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:10.236 19:38:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.236 19:38:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.236 19:38:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.500 19:38:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:10.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:08:10.500 00:08:10.500 --- 10.0.0.2 ping statistics --- 00:08:10.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.500 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:08:10.500 19:38:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:10.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:08:10.500 00:08:10.500 --- 10.0.0.1 ping statistics --- 00:08:10.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.500 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:08:10.500 19:38:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.500 19:38:51 -- nvmf/common.sh@411 -- # return 0 00:08:10.500 19:38:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:10.500 19:38:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.500 19:38:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:10.500 19:38:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:10.500 19:38:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.500 19:38:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:10.500 19:38:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:10.500 19:38:51 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:10.500 19:38:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:10.500 19:38:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:10.500 19:38:51 -- common/autotest_common.sh@10 -- # set +x 00:08:10.500 19:38:51 -- nvmf/common.sh@470 -- # nvmfpid=1622305 00:08:10.500 19:38:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.500 19:38:51 -- nvmf/common.sh@471 -- # waitforlisten 1622305 00:08:10.500 19:38:51 -- common/autotest_common.sh@817 -- # '[' -z 1622305 ']' 00:08:10.500 19:38:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.500 19:38:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:10.500 19:38:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.500 19:38:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:10.500 19:38:51 -- common/autotest_common.sh@10 -- # set +x 00:08:10.500 [2024-04-24 19:38:51.835924] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:08:10.500 [2024-04-24 19:38:51.835998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.500 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.500 [2024-04-24 19:38:51.907988] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.759 [2024-04-24 19:38:52.028567] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.759 [2024-04-24 19:38:52.028624] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.759 [2024-04-24 19:38:52.028649] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.759 [2024-04-24 19:38:52.028664] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.759 [2024-04-24 19:38:52.028676] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
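Condensing the nvmftestinit/nvmf_tcp_init plumbing above: one port of the e810 pair (cvl_0_0) is moved into a private network namespace to act as the target side while its peer (cvl_0_1) stays in the root namespace as the initiator, and the two pings verify reachability in both directions. The same sequence as a standalone sketch, with interface names and addresses exactly as logged (run as root):

  # Target NIC in its own netns; initiator NIC stays in the root netns.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns

This is also why nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk" in the nvmfappstart trace below: the target must own 10.0.0.2 inside the namespace while initiators connect from 10.0.0.1 outside it.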
00:08:10.759 [2024-04-24 19:38:52.028745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.759 [2024-04-24 19:38:52.028796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.759 [2024-04-24 19:38:52.028913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.759 [2024-04-24 19:38:52.028915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.323 19:38:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:11.323 19:38:52 -- common/autotest_common.sh@850 -- # return 0 00:08:11.323 19:38:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:11.323 19:38:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:11.323 19:38:52 -- common/autotest_common.sh@10 -- # set +x 00:08:11.581 19:38:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.581 19:38:52 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:11.581 19:38:52 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20372 00:08:11.838 [2024-04-24 19:38:53.104463] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:11.838 19:38:53 -- target/invalid.sh@40 -- # out='request: 00:08:11.838 { 00:08:11.838 "nqn": "nqn.2016-06.io.spdk:cnode20372", 00:08:11.838 "tgt_name": "foobar", 00:08:11.838 "method": "nvmf_create_subsystem", 00:08:11.838 "req_id": 1 00:08:11.838 } 00:08:11.838 Got JSON-RPC error response 00:08:11.838 response: 00:08:11.838 { 00:08:11.838 "code": -32603, 00:08:11.838 "message": "Unable to find target foobar" 00:08:11.838 }' 00:08:11.838 19:38:53 -- target/invalid.sh@41 -- # [[ request: 00:08:11.838 { 00:08:11.838 "nqn": "nqn.2016-06.io.spdk:cnode20372", 00:08:11.838 "tgt_name": "foobar", 00:08:11.838 "method": "nvmf_create_subsystem", 00:08:11.838 "req_id": 1 00:08:11.838 } 00:08:11.839 Got JSON-RPC error response 00:08:11.839 response: 00:08:11.839 { 00:08:11.839 "code": -32603, 00:08:11.839 "message": "Unable to find target foobar" 00:08:11.839 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:11.839 19:38:53 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:11.839 19:38:53 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15737 00:08:11.839 [2024-04-24 19:38:53.345272] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15737: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:12.096 19:38:53 -- target/invalid.sh@45 -- # out='request: 00:08:12.096 { 00:08:12.096 "nqn": "nqn.2016-06.io.spdk:cnode15737", 00:08:12.096 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:12.096 "method": "nvmf_create_subsystem", 00:08:12.096 "req_id": 1 00:08:12.096 } 00:08:12.096 Got JSON-RPC error response 00:08:12.096 response: 00:08:12.096 { 00:08:12.096 "code": -32602, 00:08:12.096 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:12.096 }' 00:08:12.096 19:38:53 -- target/invalid.sh@46 -- # [[ request: 00:08:12.096 { 00:08:12.096 "nqn": "nqn.2016-06.io.spdk:cnode15737", 00:08:12.096 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:12.096 "method": "nvmf_create_subsystem", 00:08:12.096 "req_id": 1 00:08:12.096 } 00:08:12.096 Got JSON-RPC error response 00:08:12.096 response: 00:08:12.096 { 
00:08:12.096 "code": -32602, 00:08:12.096 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:12.096 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:12.096 19:38:53 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:12.097 19:38:53 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29599 00:08:12.097 [2024-04-24 19:38:53.590036] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29599: invalid model number 'SPDK_Controller' 00:08:12.354 19:38:53 -- target/invalid.sh@50 -- # out='request: 00:08:12.354 { 00:08:12.354 "nqn": "nqn.2016-06.io.spdk:cnode29599", 00:08:12.354 "model_number": "SPDK_Controller\u001f", 00:08:12.354 "method": "nvmf_create_subsystem", 00:08:12.354 "req_id": 1 00:08:12.354 } 00:08:12.354 Got JSON-RPC error response 00:08:12.354 response: 00:08:12.354 { 00:08:12.354 "code": -32602, 00:08:12.354 "message": "Invalid MN SPDK_Controller\u001f" 00:08:12.354 }' 00:08:12.354 19:38:53 -- target/invalid.sh@51 -- # [[ request: 00:08:12.354 { 00:08:12.354 "nqn": "nqn.2016-06.io.spdk:cnode29599", 00:08:12.354 "model_number": "SPDK_Controller\u001f", 00:08:12.354 "method": "nvmf_create_subsystem", 00:08:12.354 "req_id": 1 00:08:12.354 } 00:08:12.354 Got JSON-RPC error response 00:08:12.354 response: 00:08:12.354 { 00:08:12.354 "code": -32602, 00:08:12.354 "message": "Invalid MN SPDK_Controller\u001f" 00:08:12.354 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:12.354 19:38:53 -- target/invalid.sh@54 -- # gen_random_s 21 00:08:12.354 19:38:53 -- target/invalid.sh@19 -- # local length=21 ll 00:08:12.354 19:38:53 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:12.354 19:38:53 -- target/invalid.sh@21 -- # local chars 00:08:12.354 19:38:53 -- target/invalid.sh@22 -- # local string 00:08:12.354 19:38:53 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:12.354 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.354 19:38:53 -- target/invalid.sh@25 -- # printf %x 38 00:08:12.354 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x26' 00:08:12.354 19:38:53 -- target/invalid.sh@25 -- # string+='&' 00:08:12.354 19:38:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:12.354 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.354 19:38:53 -- target/invalid.sh@25 -- # printf %x 101 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=e 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 35 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+='#' 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 98 00:08:12.355 19:38:53 -- 
target/invalid.sh@25 -- # echo -e '\x62' 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=b 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 38 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x26' 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+='&' 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 44 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=, 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 109 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=m 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 104 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=h 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 110 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=n 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 75 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=K 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 60 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+='<' 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 107 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=k 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 126 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+='~' 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 54 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=6 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 122 00:08:12.355 19:38:53 -- 
target/invalid.sh@25 -- # echo -e '\x7a'
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=z
00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ ))
00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length ))
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 117
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x75'
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=u
00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ ))
00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length ))
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 122
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x7a'
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=z
00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ ))
00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length ))
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 68
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x44'
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=D
00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ ))
00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length ))
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 73
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x49'
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=I
00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ ))
00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length ))
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 97
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x61'
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=a
00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ ))
00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length ))
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # printf %x 64
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # echo -e '\x40'
00:08:12.355 19:38:53 -- target/invalid.sh@25 -- # string+=@
00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll++ ))
00:08:12.355 19:38:53 -- target/invalid.sh@24 -- # (( ll < length ))
00:08:12.355 19:38:53 -- target/invalid.sh@28 -- # [[ & == \- ]]
00:08:12.355 19:38:53 -- target/invalid.sh@31 -- # echo '&e#b&,mhnK<k~6zuzDIa@'
00:08:15.499 19:38:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:08:15.499 19:38:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:18.034 19:38:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:08:18.034
00:08:18.034 real 0m9.355s
00:08:18.034 user 0m23.184s
00:08:18.034 sys 0m2.477s
00:08:18.034 19:38:58 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:08:18.034 19:38:58 -- common/autotest_common.sh@10 -- # set +x
00:08:18.034 ************************************
00:08:18.034 END TEST nvmf_invalid
00:08:18.034 ************************************
00:08:18.034 19:38:58 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:08:18.034 19:38:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:08:18.034 19:38:58 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:18.034 19:38:58 -- common/autotest_common.sh@10 -- # set +x
00:08:18.034 ************************************
00:08:18.034 START TEST nvmf_abort
00:08:18.034 ************************************
00:08:18.034 19:38:59 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:18.034 * Looking for test storage... 00:08:18.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.034 19:38:59 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.034 19:38:59 -- nvmf/common.sh@7 -- # uname -s 00:08:18.034 19:38:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.034 19:38:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.034 19:38:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.034 19:38:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.034 19:38:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.034 19:38:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.034 19:38:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.034 19:38:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.034 19:38:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.034 19:38:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.034 19:38:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:18.034 19:38:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:18.034 19:38:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.034 19:38:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.034 19:38:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.034 19:38:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.034 19:38:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.034 19:38:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.034 19:38:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.034 19:38:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.034 19:38:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.034 19:38:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.034 19:38:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.034 19:38:59 -- paths/export.sh@5 -- # export PATH 00:08:18.034 19:38:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.034 19:38:59 -- nvmf/common.sh@47 -- # : 0 00:08:18.034 19:38:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:18.034 19:38:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:18.034 19:38:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.034 19:38:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.034 19:38:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.034 19:38:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:18.034 19:38:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:18.034 19:38:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:18.034 19:38:59 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:18.034 19:38:59 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:18.034 19:38:59 -- target/abort.sh@14 -- # nvmftestinit 00:08:18.034 19:38:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:18.034 19:38:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.034 19:38:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:18.034 19:38:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:18.034 19:38:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:18.034 19:38:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.034 19:38:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.034 19:38:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.034 19:38:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:18.034 19:38:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:18.034 19:38:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:18.034 19:38:59 -- common/autotest_common.sh@10 -- # set +x 00:08:19.934 19:39:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:19.934 19:39:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:19.934 19:39:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:19.934 19:39:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:19.934 19:39:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:19.934 19:39:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:19.934 19:39:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:19.934 19:39:01 -- nvmf/common.sh@295 -- # net_devs=() 00:08:19.934 19:39:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:19.934 19:39:01 -- nvmf/common.sh@296 -- 
# e810=() 00:08:19.934 19:39:01 -- nvmf/common.sh@296 -- # local -ga e810 00:08:19.934 19:39:01 -- nvmf/common.sh@297 -- # x722=() 00:08:19.934 19:39:01 -- nvmf/common.sh@297 -- # local -ga x722 00:08:19.934 19:39:01 -- nvmf/common.sh@298 -- # mlx=() 00:08:19.934 19:39:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:19.934 19:39:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.934 19:39:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.934 19:39:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.934 19:39:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.934 19:39:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.934 19:39:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.934 19:39:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.934 19:39:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.934 19:39:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.934 19:39:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.934 19:39:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.934 19:39:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:19.934 19:39:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:19.934 19:39:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:19.934 19:39:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.934 19:39:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:19.934 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:19.934 19:39:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:19.934 19:39:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:19.934 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:19.934 19:39:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:19.934 19:39:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.934 19:39:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.934 19:39:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:19.934 19:39:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.934 19:39:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:19.934 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:08:19.934 19:39:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.934 19:39:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:19.934 19:39:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.934 19:39:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:19.934 19:39:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.934 19:39:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:19.934 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:19.934 19:39:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.934 19:39:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:19.934 19:39:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:19.934 19:39:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:19.934 19:39:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.934 19:39:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.934 19:39:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.934 19:39:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:19.934 19:39:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.934 19:39:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.934 19:39:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:19.934 19:39:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.934 19:39:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.934 19:39:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:19.934 19:39:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:19.934 19:39:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.934 19:39:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.934 19:39:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.934 19:39:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.934 19:39:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:19.934 19:39:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.934 19:39:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.934 19:39:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.934 19:39:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:19.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:08:19.934 00:08:19.934 --- 10.0.0.2 ping statistics --- 00:08:19.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.934 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:19.934 19:39:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:08:19.934 00:08:19.934 --- 10.0.0.1 ping statistics --- 00:08:19.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.934 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:08:19.934 19:39:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.934 19:39:01 -- nvmf/common.sh@411 -- # return 0 00:08:19.934 19:39:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:19.934 19:39:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.934 19:39:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:19.934 19:39:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.934 19:39:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:19.934 19:39:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:19.934 19:39:01 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:19.934 19:39:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:19.934 19:39:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:19.934 19:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:19.934 19:39:01 -- nvmf/common.sh@470 -- # nvmfpid=1625031 00:08:19.934 19:39:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:19.934 19:39:01 -- nvmf/common.sh@471 -- # waitforlisten 1625031 00:08:19.934 19:39:01 -- common/autotest_common.sh@817 -- # '[' -z 1625031 ']' 00:08:19.934 19:39:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.934 19:39:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:19.934 19:39:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.934 19:39:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:19.934 19:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:19.934 [2024-04-24 19:39:01.259577] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:08:19.934 [2024-04-24 19:39:01.259674] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.934 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.935 [2024-04-24 19:39:01.329706] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.192 [2024-04-24 19:39:01.448759] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.192 [2024-04-24 19:39:01.448817] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.192 [2024-04-24 19:39:01.448834] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.192 [2024-04-24 19:39:01.448847] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.192 [2024-04-24 19:39:01.448859] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
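Stripped of the xtrace noise, the nvmf_tcp_init sequence traced above reduces to a short recipe. A minimal sketch follows, using the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing that this particular run detected; on other machines the E810 port names will differ:

    # Move the target-side port into a private network namespace so target
    # and initiator traffic must cross the physical link, not loopback.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # Address both ends: initiator in the default namespace, target inside.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # Bring up both links plus the namespaced loopback.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open TCP port 4420 (NVMe/TCP) past the host firewall.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions before launching nvmf_tgt.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two successful pings in the log are the last of these steps; everything after them is target startup.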
00:08:20.192 [2024-04-24 19:39:01.448985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.192 [2024-04-24 19:39:01.449069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.192 [2024-04-24 19:39:01.449072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.787 19:39:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:20.787 19:39:02 -- common/autotest_common.sh@850 -- # return 0 00:08:20.787 19:39:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:20.787 19:39:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:20.787 19:39:02 -- common/autotest_common.sh@10 -- # set +x 00:08:20.787 19:39:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.787 19:39:02 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:20.787 19:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.787 19:39:02 -- common/autotest_common.sh@10 -- # set +x 00:08:20.787 [2024-04-24 19:39:02.223928] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.787 19:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.787 19:39:02 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:20.787 19:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.787 19:39:02 -- common/autotest_common.sh@10 -- # set +x 00:08:20.787 Malloc0 00:08:20.787 19:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.787 19:39:02 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:20.787 19:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.787 19:39:02 -- common/autotest_common.sh@10 -- # set +x 00:08:20.787 Delay0 00:08:20.787 19:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.788 19:39:02 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:20.788 19:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.788 19:39:02 -- common/autotest_common.sh@10 -- # set +x 00:08:20.788 19:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.788 19:39:02 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:20.788 19:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.788 19:39:02 -- common/autotest_common.sh@10 -- # set +x 00:08:20.788 19:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.788 19:39:02 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:20.788 19:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.788 19:39:02 -- common/autotest_common.sh@10 -- # set +x 00:08:20.788 [2024-04-24 19:39:02.292670] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.788 19:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.788 19:39:02 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:20.788 19:39:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.788 19:39:02 -- common/autotest_common.sh@10 -- # set +x 00:08:21.045 19:39:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:21.046 19:39:02 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:21.046 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.046 [2024-04-24 19:39:02.398819] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:22.945 Initializing NVMe Controllers 00:08:22.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:22.945 controller IO queue size 128 less than required 00:08:22.945 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:22.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:22.945 Initialization complete. Launching workers. 00:08:22.945 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33456 00:08:22.945 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33517, failed to submit 62 00:08:22.945 success 33460, unsuccess 57, failed 0 00:08:22.945 19:39:04 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:22.945 19:39:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.945 19:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:23.203 19:39:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.203 19:39:04 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:23.203 19:39:04 -- target/abort.sh@38 -- # nvmftestfini 00:08:23.203 19:39:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:23.203 19:39:04 -- nvmf/common.sh@117 -- # sync 00:08:23.203 19:39:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:23.203 19:39:04 -- nvmf/common.sh@120 -- # set +e 00:08:23.203 19:39:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.203 19:39:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:23.203 rmmod nvme_tcp 00:08:23.203 rmmod nvme_fabrics 00:08:23.203 rmmod nvme_keyring 00:08:23.203 19:39:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.203 19:39:04 -- nvmf/common.sh@124 -- # set -e 00:08:23.203 19:39:04 -- nvmf/common.sh@125 -- # return 0 00:08:23.203 19:39:04 -- nvmf/common.sh@478 -- # '[' -n 1625031 ']' 00:08:23.203 19:39:04 -- nvmf/common.sh@479 -- # killprocess 1625031 00:08:23.203 19:39:04 -- common/autotest_common.sh@936 -- # '[' -z 1625031 ']' 00:08:23.203 19:39:04 -- common/autotest_common.sh@940 -- # kill -0 1625031 00:08:23.203 19:39:04 -- common/autotest_common.sh@941 -- # uname 00:08:23.203 19:39:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:23.203 19:39:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1625031 00:08:23.203 19:39:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:23.203 19:39:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:23.203 19:39:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1625031' 00:08:23.203 killing process with pid 1625031 00:08:23.203 19:39:04 -- common/autotest_common.sh@955 -- # kill 1625031 00:08:23.203 19:39:04 -- common/autotest_common.sh@960 -- # wait 1625031 00:08:23.462 19:39:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:23.462 19:39:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:23.462 19:39:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:23.462 19:39:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:23.462 19:39:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:23.462 
19:39:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.462 19:39:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.462 19:39:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.368 19:39:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:25.368 00:08:25.368 real 0m7.798s 00:08:25.368 user 0m12.179s 00:08:25.368 sys 0m2.557s 00:08:25.368 19:39:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:25.368 19:39:06 -- common/autotest_common.sh@10 -- # set +x 00:08:25.368 ************************************ 00:08:25.368 END TEST nvmf_abort 00:08:25.369 ************************************ 00:08:25.627 19:39:06 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:25.627 19:39:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:25.627 19:39:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.627 19:39:06 -- common/autotest_common.sh@10 -- # set +x 00:08:25.627 ************************************ 00:08:25.627 START TEST nvmf_ns_hotplug_stress 00:08:25.628 ************************************ 00:08:25.628 19:39:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:25.628 * Looking for test storage... 00:08:25.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.628 19:39:07 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.628 19:39:07 -- nvmf/common.sh@7 -- # uname -s 00:08:25.628 19:39:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.628 19:39:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.628 19:39:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.628 19:39:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.628 19:39:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.628 19:39:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.628 19:39:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.628 19:39:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.628 19:39:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.628 19:39:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.628 19:39:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.628 19:39:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.628 19:39:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.628 19:39:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.628 19:39:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.628 19:39:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.628 19:39:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.628 19:39:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.628 19:39:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.628 19:39:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.628 19:39:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.628 19:39:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.628 19:39:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.628 19:39:07 -- paths/export.sh@5 -- # export PATH 00:08:25.628 19:39:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.628 19:39:07 -- nvmf/common.sh@47 -- # : 0 00:08:25.628 19:39:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.628 19:39:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.628 19:39:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.628 19:39:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.628 19:39:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.628 19:39:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.628 19:39:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.628 19:39:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.628 19:39:07 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.628 19:39:07 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:08:25.628 19:39:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:25.628 19:39:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.628 19:39:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:25.628 19:39:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:25.628 19:39:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:25.628 19:39:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd
_remove_spdk_ns 00:08:25.628 19:39:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.628 19:39:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.628 19:39:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:25.628 19:39:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:25.628 19:39:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.628 19:39:07 -- common/autotest_common.sh@10 -- # set +x 00:08:27.531 19:39:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:27.531 19:39:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:27.531 19:39:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:27.531 19:39:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:27.531 19:39:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:27.531 19:39:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:27.531 19:39:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:27.531 19:39:08 -- nvmf/common.sh@295 -- # net_devs=() 00:08:27.531 19:39:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:27.531 19:39:08 -- nvmf/common.sh@296 -- # e810=() 00:08:27.531 19:39:08 -- nvmf/common.sh@296 -- # local -ga e810 00:08:27.531 19:39:08 -- nvmf/common.sh@297 -- # x722=() 00:08:27.531 19:39:08 -- nvmf/common.sh@297 -- # local -ga x722 00:08:27.531 19:39:08 -- nvmf/common.sh@298 -- # mlx=() 00:08:27.531 19:39:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:27.531 19:39:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.531 19:39:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.531 19:39:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.531 19:39:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.531 19:39:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.531 19:39:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.531 19:39:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.532 19:39:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.532 19:39:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.532 19:39:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.532 19:39:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.532 19:39:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:27.532 19:39:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:27.532 19:39:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:27.532 19:39:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.532 19:39:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:27.532 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:27.532 19:39:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.532 19:39:08 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:27.532 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:27.532 19:39:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:27.532 19:39:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.532 19:39:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.532 19:39:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:27.532 19:39:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.532 19:39:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:27.532 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:27.532 19:39:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.532 19:39:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.532 19:39:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.532 19:39:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:27.532 19:39:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.532 19:39:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:27.532 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:27.532 19:39:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.532 19:39:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:27.532 19:39:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:27.532 19:39:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:27.532 19:39:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:27.532 19:39:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.532 19:39:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.532 19:39:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.532 19:39:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:27.532 19:39:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.532 19:39:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.532 19:39:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:27.532 19:39:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.532 19:39:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.532 19:39:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:27.532 19:39:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:27.532 19:39:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.532 19:39:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.532 19:39:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.532 19:39:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.532 19:39:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:27.532 19:39:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:08:27.791 19:39:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.791 19:39:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.791 19:39:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:27.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:08:27.791 00:08:27.791 --- 10.0.0.2 ping statistics --- 00:08:27.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.791 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:08:27.791 19:39:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:27.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:08:27.791 00:08:27.791 --- 10.0.0.1 ping statistics --- 00:08:27.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.791 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:08:27.791 19:39:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.791 19:39:09 -- nvmf/common.sh@411 -- # return 0 00:08:27.791 19:39:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:27.791 19:39:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.791 19:39:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:27.791 19:39:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:27.791 19:39:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.791 19:39:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:27.791 19:39:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:27.791 19:39:09 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:08:27.791 19:39:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:27.791 19:39:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:27.791 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:08:27.791 19:39:09 -- nvmf/common.sh@470 -- # nvmfpid=1627926 00:08:27.791 19:39:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:27.791 19:39:09 -- nvmf/common.sh@471 -- # waitforlisten 1627926 00:08:27.791 19:39:09 -- common/autotest_common.sh@817 -- # '[' -z 1627926 ']' 00:08:27.791 19:39:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.791 19:39:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:27.791 19:39:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.791 19:39:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:27.791 19:39:09 -- common/autotest_common.sh@10 -- # set +x 00:08:27.791 [2024-04-24 19:39:09.166297] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:08:27.791 [2024-04-24 19:39:09.166376] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.791 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.791 [2024-04-24 19:39:09.233043] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:28.049 [2024-04-24 19:39:09.345791] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.049 [2024-04-24 19:39:09.345840] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.049 [2024-04-24 19:39:09.345870] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.049 [2024-04-24 19:39:09.345882] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.049 [2024-04-24 19:39:09.345898] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.049 [2024-04-24 19:39:09.345992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.049 [2024-04-24 19:39:09.346059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.049 [2024-04-24 19:39:09.346055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.615 19:39:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:28.615 19:39:10 -- common/autotest_common.sh@850 -- # return 0 00:08:28.615 19:39:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:28.615 19:39:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:28.615 19:39:10 -- common/autotest_common.sh@10 -- # set +x 00:08:28.615 19:39:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.615 19:39:10 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:08:28.615 19:39:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:28.873 [2024-04-24 19:39:10.381059] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.130 19:39:10 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:29.387 19:39:10 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.644 [2024-04-24 19:39:10.904247] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.644 19:39:10 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:29.902 19:39:11 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:29.902 Malloc0 00:08:30.159 19:39:11 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:30.159 Delay0 00:08:30.159 19:39:11 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.417 19:39:11 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:30.675 NULL1 00:08:30.675 19:39:12 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:30.932 19:39:12 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=1628364 00:08:30.932 19:39:12 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:30.932 19:39:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:30.932 19:39:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.932 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.302 Read completed with error (sct=0, sc=11) 00:08:32.302 19:39:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:32.560 19:39:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:08:32.560 19:39:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:32.817 true 00:08:32.817 19:39:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:32.817 19:39:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.381 19:39:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.947 19:39:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:08:33.947 19:39:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:33.947 true 00:08:33.947 19:39:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:33.947 19:39:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.204 19:39:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.462 19:39:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:08:34.462 19:39:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:34.719 true 00:08:34.719 19:39:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 
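Every iteration from here on repeats the same cycle, so it is worth spelling out once. Reconstructed from the ns_hotplug_stress.sh line numbers visible in the trace (a paraphrase, not the verbatim script; rpc.py stands in for the full scripts/rpc.py path):

    null_size=1000
    while kill -0 "$PERF_PID"; do    # keep cycling while spdk_nvme_perf (pid 1628364 here) is alive
        # Hot-remove namespace 1 while the perf job is reading from it,
        # then hot-add the Delay0 bdev back as a namespace.
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        # Grow the NULL1 bdev by one block each pass (1001, 1002, ...).
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"
    done

The bare "true" records in the trace are the JSON result of each bdev_null_resize call reporting success.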
00:08:34.719 19:39:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.976 19:39:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.234 19:39:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:08:35.234 19:39:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:35.522 true 00:08:35.522 19:39:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:35.522 19:39:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.893 19:39:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.893 19:39:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:08:36.893 19:39:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:37.151 true 00:08:37.152 19:39:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:37.152 19:39:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.409 19:39:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.666 19:39:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:08:37.666 19:39:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:37.922 true 00:08:37.922 19:39:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:37.922 19:39:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.855 19:39:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:38.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:38.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:39.112 19:39:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:08:39.112 19:39:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:39.370 true 00:08:39.370 19:39:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:39.370 19:39:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.628 19:39:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:39.887 19:39:21 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:08:39.887 19:39:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:40.144 true 00:08:40.144 19:39:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:40.144 19:39:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.077 19:39:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.336 19:39:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:08:41.336 19:39:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:41.594 true 00:08:41.594 19:39:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:41.594 19:39:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.852 19:39:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.109 19:39:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:08:42.109 19:39:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:42.366 true 00:08:42.366 19:39:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:42.366 19:39:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.624 19:39:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.881 19:39:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:08:42.881 19:39:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:42.881 true 00:08:43.139 19:39:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:43.139 19:39:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.071 19:39:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.329 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.329 19:39:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:08:44.329 19:39:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:44.587 
true 00:08:44.587 19:39:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:44.587 19:39:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.520 19:39:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.520 19:39:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:08:45.520 19:39:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:45.778 true 00:08:45.778 19:39:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:45.778 19:39:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.035 19:39:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.292 19:39:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:08:46.293 19:39:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:46.550 true 00:08:46.550 19:39:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:46.550 19:39:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.482 19:39:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:47.740 19:39:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:08:47.740 19:39:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:47.997 true 00:08:47.997 19:39:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:47.997 19:39:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.254 19:39:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.512 19:39:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:08:48.512 19:39:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:48.770 true 00:08:48.770 19:39:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:48.770 19:39:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.750 19:39:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.006 19:39:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:08:50.006 19:39:31 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:50.263 true 00:08:50.263 19:39:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:50.263 19:39:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.520 19:39:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.778 19:39:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:08:50.778 19:39:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:51.036 true 00:08:51.036 19:39:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:51.036 19:39:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.293 19:39:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.293 19:39:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:08:51.293 19:39:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:51.550 true 00:08:51.550 19:39:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:51.550 19:39:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.927 19:39:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.927 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:52.927 19:39:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:08:52.927 19:39:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:53.185 true 00:08:53.185 19:39:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:53.185 19:39:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.442 19:39:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.700 19:39:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:08:53.700 19:39:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:53.958 true 00:08:53.958 19:39:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:53.958 19:39:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.891 19:39:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.891 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
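A note on the recurring "Read completed with error (sct=0, sc=11)" records: status code type 0 is the NVMe generic command status set, in which status code 11 (0x0b) is Invalid Namespace or Format, the completion one would expect for reads that race the namespace hot-remove. The "Message suppressed 999 times" prefix means the tool prints one line per thousand identical completions, so each such record stands for a burst of errors during the window when the namespace is detached; they indicate the stress test is working as intended, not that it is failing.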
00:08:54.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.149 19:39:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:08:55.149 19:39:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:55.407 true 00:08:55.407 19:39:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:55.407 19:39:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.664 19:39:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.921 19:39:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:08:55.921 19:39:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:56.179 true 00:08:56.179 19:39:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:56.179 19:39:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.112 19:39:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.369 19:39:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:08:57.369 19:39:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:57.627 true 00:08:57.627 19:39:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:57.627 19:39:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.885 19:39:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.142 19:39:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:08:58.142 19:39:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:58.400 true 00:08:58.400 19:39:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:58.400 19:39:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.332 19:39:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.589 19:39:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:08:59.589 19:39:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:59.847 true 00:08:59.847 19:39:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:08:59.847 19:39:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.104 19:39:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.362 19:39:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:09:00.362 19:39:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:00.362 true 00:09:00.620 19:39:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:09:00.620 19:39:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.187 Initializing NVMe Controllers 00:09:01.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:01.187 Controller IO queue size 128, less than required. 00:09:01.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:01.187 Controller IO queue size 128, less than required. 00:09:01.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:01.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:01.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:01.187 Initialization complete. Launching workers. 00:09:01.187 ======================================================== 00:09:01.187 Latency(us) 00:09:01.187 Device Information : IOPS MiB/s Average min max 00:09:01.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 952.93 0.47 70477.06 2232.41 1013051.82 00:09:01.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10579.07 5.17 12099.64 2944.56 455720.96 00:09:01.187 ======================================================== 00:09:01.187 Total : 11532.00 5.63 16923.59 2232.41 1013051.82 00:09:01.187 00:09:01.445 19:39:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.703 19:39:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:09:01.703 19:39:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:01.961 true 00:09:01.961 19:39:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1628364 00:09:01.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1628364) - No such process 00:09:01.961 19:39:43 -- target/ns_hotplug_stress.sh@44 -- # wait 1628364 00:09:01.961 19:39:43 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:09:01.961 19:39:43 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:09:01.961 19:39:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:01.961 19:39:43 -- nvmf/common.sh@117 -- # sync 00:09:01.961 19:39:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.961 19:39:43 -- nvmf/common.sh@120 -- # set +e 00:09:01.961 19:39:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.961 19:39:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.961 rmmod nvme_tcp 00:09:01.961 rmmod nvme_fabrics 00:09:01.961 rmmod nvme_keyring 00:09:01.961 19:39:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.961 19:39:43 -- nvmf/common.sh@124 -- # set -e 00:09:01.961 19:39:43 -- nvmf/common.sh@125 -- # return 0 00:09:01.961 19:39:43 -- nvmf/common.sh@478 -- # '[' -n 1627926 ']' 00:09:01.961 19:39:43 -- nvmf/common.sh@479 -- # 
killprocess 1627926 00:09:01.961 19:39:43 -- common/autotest_common.sh@936 -- # '[' -z 1627926 ']' 00:09:01.961 19:39:43 -- common/autotest_common.sh@940 -- # kill -0 1627926 00:09:01.961 19:39:43 -- common/autotest_common.sh@941 -- # uname 00:09:01.961 19:39:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:01.961 19:39:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1627926 00:09:01.961 19:39:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:01.961 19:39:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:01.961 19:39:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1627926' 00:09:01.961 killing process with pid 1627926 00:09:01.961 19:39:43 -- common/autotest_common.sh@955 -- # kill 1627926 00:09:01.961 19:39:43 -- common/autotest_common.sh@960 -- # wait 1627926 00:09:02.220 19:39:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:02.220 19:39:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:02.220 19:39:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:02.220 19:39:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:02.220 19:39:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:02.220 19:39:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.220 19:39:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.220 19:39:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.179 19:39:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:04.179 00:09:04.179 real 0m38.675s 00:09:04.179 user 2m29.513s 00:09:04.179 sys 0m10.028s 00:09:04.179 19:39:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:04.179 19:39:45 -- common/autotest_common.sh@10 -- # set +x 00:09:04.179 ************************************ 00:09:04.179 END TEST nvmf_ns_hotplug_stress 00:09:04.179 ************************************ 00:09:04.179 19:39:45 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:04.179 19:39:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:04.179 19:39:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:04.179 19:39:45 -- common/autotest_common.sh@10 -- # set +x 00:09:04.438 ************************************ 00:09:04.438 START TEST nvmf_connect_stress 00:09:04.438 ************************************ 00:09:04.438 19:39:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:04.438 * Looking for test storage... 
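For anyone tracing the run that just finished: each iteration of ns_hotplug_stress.sh is a handful of rpc.py calls fired while the I/O generator stays alive. A minimal sketch of one iteration, paraphrased from the xtrace above; the perf_pid variable name is illustrative (the pid was 1628364 in this run) and the NULL1/Delay0 bdevs were created earlier in the test:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1021
  while kill -0 "$perf_pid" 2>/dev/null; do                        # stop once the I/O generator exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove namespace 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-add it back
      null_size=$((null_size + 1))                                 # 1022, 1023, ... as seen in the trace
      $rpc bdev_null_resize NULL1 "$null_size"                     # resize the null bdev under load
  done

The suppressed "Read completed with error (sct=0, sc=11)" messages are the point of the exercise: sc=11 (0x0b, Invalid Namespace or Format) is what in-flight reads return while the namespace is momentarily detached. In the Latency(us) summary, the Total row is the IOPS-weighted aggregate of the two namespaces (952.93 + 10579.07 = 11532.00 IOPS; the 16923.59 us average is consistent with weighting each namespace's latency by its IOPS).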
00:09:04.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.438 19:39:45 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.438 19:39:45 -- nvmf/common.sh@7 -- # uname -s 00:09:04.438 19:39:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.438 19:39:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.438 19:39:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.438 19:39:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.438 19:39:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.438 19:39:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.438 19:39:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.438 19:39:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.438 19:39:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.438 19:39:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.438 19:39:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:04.438 19:39:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:04.438 19:39:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.438 19:39:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.438 19:39:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.438 19:39:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.438 19:39:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.438 19:39:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.438 19:39:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.438 19:39:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.438 19:39:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.438 19:39:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.438 19:39:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.438 19:39:45 -- paths/export.sh@5 -- # export PATH 00:09:04.438 19:39:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.438 19:39:45 -- nvmf/common.sh@47 -- # : 0 00:09:04.438 19:39:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.438 19:39:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.438 19:39:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.438 19:39:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.438 19:39:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.438 19:39:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.438 19:39:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.438 19:39:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.438 19:39:45 -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:04.438 19:39:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:04.438 19:39:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.438 19:39:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:04.438 19:39:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:04.438 19:39:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:04.438 19:39:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.438 19:39:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.438 19:39:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.438 19:39:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:04.438 19:39:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:04.438 19:39:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:04.438 19:39:45 -- common/autotest_common.sh@10 -- # set +x 00:09:06.340 19:39:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:06.340 19:39:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.340 19:39:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.340 19:39:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.340 19:39:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.340 19:39:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.340 19:39:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.340 19:39:47 -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.340 19:39:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.340 19:39:47 -- nvmf/common.sh@296 -- # e810=() 00:09:06.340 19:39:47 -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.340 19:39:47 -- nvmf/common.sh@297 -- # x722=() 
00:09:06.340 19:39:47 -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.340 19:39:47 -- nvmf/common.sh@298 -- # mlx=() 00:09:06.340 19:39:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.340 19:39:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.340 19:39:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.340 19:39:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.340 19:39:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.340 19:39:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.340 19:39:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.340 19:39:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.340 19:39:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.340 19:39:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.340 19:39:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.340 19:39:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.340 19:39:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:06.340 19:39:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:06.340 19:39:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.340 19:39:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.340 19:39:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:06.340 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:06.340 19:39:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.340 19:39:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:06.340 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:06.340 19:39:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.340 19:39:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.340 19:39:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.340 19:39:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:06.340 19:39:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.340 19:39:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:06.340 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:06.340 19:39:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:09:06.340 19:39:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.340 19:39:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.340 19:39:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:06.340 19:39:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.340 19:39:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:06.340 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:06.340 19:39:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.340 19:39:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:06.340 19:39:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:06.340 19:39:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:06.340 19:39:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:06.340 19:39:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.340 19:39:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.340 19:39:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.340 19:39:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:06.340 19:39:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.340 19:39:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.340 19:39:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:06.340 19:39:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.340 19:39:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.340 19:39:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:06.598 19:39:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:06.598 19:39:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.598 19:39:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.598 19:39:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.598 19:39:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.598 19:39:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:06.598 19:39:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.598 19:39:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.598 19:39:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.598 19:39:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:06.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:09:06.598 00:09:06.598 --- 10.0.0.2 ping statistics --- 00:09:06.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.598 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:09:06.598 19:39:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:06.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:09:06.598 00:09:06.598 --- 10.0.0.1 ping statistics --- 00:09:06.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.598 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:06.598 19:39:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.598 19:39:47 -- nvmf/common.sh@411 -- # return 0 00:09:06.598 19:39:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:06.598 19:39:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.598 19:39:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:06.598 19:39:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:06.598 19:39:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.598 19:39:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:06.598 19:39:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:06.598 19:39:48 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:06.598 19:39:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:06.598 19:39:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:06.598 19:39:48 -- common/autotest_common.sh@10 -- # set +x 00:09:06.598 19:39:48 -- nvmf/common.sh@470 -- # nvmfpid=1634080 00:09:06.598 19:39:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:06.598 19:39:48 -- nvmf/common.sh@471 -- # waitforlisten 1634080 00:09:06.598 19:39:48 -- common/autotest_common.sh@817 -- # '[' -z 1634080 ']' 00:09:06.598 19:39:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.598 19:39:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:06.599 19:39:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.599 19:39:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:06.599 19:39:48 -- common/autotest_common.sh@10 -- # set +x 00:09:06.599 [2024-04-24 19:39:48.055158] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:09:06.599 [2024-04-24 19:39:48.055239] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.599 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.856 [2024-04-24 19:39:48.124785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:06.856 [2024-04-24 19:39:48.239296] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.856 [2024-04-24 19:39:48.239351] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.856 [2024-04-24 19:39:48.239378] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.856 [2024-04-24 19:39:48.239396] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.856 [2024-04-24 19:39:48.239407] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
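Restating the environment plumbing compactly: nvmf_tcp_init splits the two E810 ports between the host and a private network namespace, so initiator and target exchange NVMe/TCP over real hardware. The commands below are lifted directly from the xtrace above; interface and namespace names are the ones from this run, nothing is assumed:

  ip netns add cvl_0_0_ns_spdk                          # the target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # first port moves into the target netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # second port stays with the initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                    # reachability check in each direction
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

nvmf_tgt is then launched inside that namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE entry above), which is why the listener configured later on 10.0.0.2:4420 is reachable from the host-side initiator.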
00:09:06.856 [2024-04-24 19:39:48.239491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.856 [2024-04-24 19:39:48.239545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.856 [2024-04-24 19:39:48.239548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.789 19:39:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:07.789 19:39:49 -- common/autotest_common.sh@850 -- # return 0 00:09:07.789 19:39:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:07.789 19:39:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:07.789 19:39:49 -- common/autotest_common.sh@10 -- # set +x 00:09:07.789 19:39:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.789 19:39:49 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:07.789 19:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.789 19:39:49 -- common/autotest_common.sh@10 -- # set +x 00:09:07.789 [2024-04-24 19:39:49.064294] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.789 19:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.789 19:39:49 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:07.789 19:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.789 19:39:49 -- common/autotest_common.sh@10 -- # set +x 00:09:07.789 19:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.789 19:39:49 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.789 19:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.789 19:39:49 -- common/autotest_common.sh@10 -- # set +x 00:09:07.789 [2024-04-24 19:39:49.091747] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.789 19:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.789 19:39:49 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:07.789 19:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.789 19:39:49 -- common/autotest_common.sh@10 -- # set +x 00:09:07.789 NULL1 00:09:07.789 19:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.789 19:39:49 -- target/connect_stress.sh@21 -- # PERF_PID=1634234 00:09:07.789 19:39:49 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:07.789 19:39:49 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:07.789 19:39:49 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:07.789 19:39:49 -- target/connect_stress.sh@27 -- # seq 1 20 00:09:07.789 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.789 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:07.790 19:39:49 -- target/connect_stress.sh@28 -- # cat 00:09:07.790 19:39:49 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:07.790 19:39:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:07.790 19:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.790 19:39:49 -- common/autotest_common.sh@10 -- # set +x 00:09:08.047 19:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.047 19:39:49 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:08.047 19:39:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.047 19:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.047 19:39:49 -- common/autotest_common.sh@10 -- # set +x 00:09:08.304 19:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.304 19:39:49 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:08.304 19:39:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.304 19:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.304 19:39:49 -- common/autotest_common.sh@10 -- # set +x 00:09:08.869 19:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:08.869 19:39:50 -- target/connect_stress.sh@34 -- # 
kill -0 1634234 00:09:08.869 19:39:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:08.869 19:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:08.869 19:39:50 -- common/autotest_common.sh@10 -- # set +x 00:09:09.126 19:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.126 19:39:50 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:09.126 19:39:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.126 19:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.126 19:39:50 -- common/autotest_common.sh@10 -- # set +x 00:09:09.383 19:39:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.383 19:39:50 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:09.383 19:39:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.383 19:39:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.383 19:39:50 -- common/autotest_common.sh@10 -- # set +x 00:09:09.640 19:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.640 19:39:51 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:09.640 19:39:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.640 19:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.640 19:39:51 -- common/autotest_common.sh@10 -- # set +x 00:09:09.898 19:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.898 19:39:51 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:09.898 19:39:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:09.898 19:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.898 19:39:51 -- common/autotest_common.sh@10 -- # set +x 00:09:10.463 19:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.463 19:39:51 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:10.463 19:39:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.463 19:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.463 19:39:51 -- common/autotest_common.sh@10 -- # set +x 00:09:10.721 19:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.721 19:39:52 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:10.721 19:39:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.721 19:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.721 19:39:52 -- common/autotest_common.sh@10 -- # set +x 00:09:10.978 19:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.978 19:39:52 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:10.978 19:39:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:10.978 19:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.978 19:39:52 -- common/autotest_common.sh@10 -- # set +x 00:09:11.235 19:39:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.235 19:39:52 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:11.235 19:39:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.235 19:39:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.235 19:39:52 -- common/autotest_common.sh@10 -- # set +x 00:09:11.800 19:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.800 19:39:53 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:11.800 19:39:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:11.800 19:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.800 19:39:53 -- common/autotest_common.sh@10 -- # set +x 00:09:12.057 19:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.057 19:39:53 -- target/connect_stress.sh@34 -- # kill -0 
1634234 00:09:12.057 19:39:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.057 19:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.057 19:39:53 -- common/autotest_common.sh@10 -- # set +x 00:09:12.314 19:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.314 19:39:53 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:12.314 19:39:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.314 19:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.314 19:39:53 -- common/autotest_common.sh@10 -- # set +x 00:09:12.572 19:39:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.572 19:39:53 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:12.572 19:39:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.572 19:39:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.572 19:39:53 -- common/autotest_common.sh@10 -- # set +x 00:09:12.830 19:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.830 19:39:54 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:12.830 19:39:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:12.830 19:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.830 19:39:54 -- common/autotest_common.sh@10 -- # set +x 00:09:13.395 19:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.395 19:39:54 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:13.395 19:39:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.395 19:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.395 19:39:54 -- common/autotest_common.sh@10 -- # set +x 00:09:13.653 19:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.653 19:39:54 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:13.653 19:39:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.653 19:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.653 19:39:54 -- common/autotest_common.sh@10 -- # set +x 00:09:13.910 19:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.910 19:39:55 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:13.910 19:39:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:13.910 19:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:13.910 19:39:55 -- common/autotest_common.sh@10 -- # set +x 00:09:14.168 19:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.168 19:39:55 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:14.168 19:39:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.168 19:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.168 19:39:55 -- common/autotest_common.sh@10 -- # set +x 00:09:14.425 19:39:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.425 19:39:55 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:14.425 19:39:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.425 19:39:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.425 19:39:55 -- common/autotest_common.sh@10 -- # set +x 00:09:14.991 19:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:14.991 19:39:56 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:14.991 19:39:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:14.991 19:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:14.991 19:39:56 -- common/autotest_common.sh@10 -- # set +x 00:09:15.249 19:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.249 19:39:56 -- target/connect_stress.sh@34 -- # kill -0 1634234 
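The alternating kill -0 1634234 / rpc_cmd entries filling this stretch of the log are connect_stress.sh's supervision loop: while the connect_stress initiator (PERF_PID=1634234) is still running, the script keeps replaying a batch of RPCs at the target so connect/disconnect traffic races against RPC activity. A rough paraphrase; the stdin redirection is an assumption, since xtrace does not show redirections, and the contents of the 20-entry rpc.txt batch assembled by the seq 1 20 / cat loop are not visible in this log:

  rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
  while kill -0 "$PERF_PID" 2>/dev/null; do   # line 34: is the initiator still alive?
      rpc_cmd < "$rpcs"                       # line 35: replay the queued RPC batch (redirection assumed)
  done
  wait "$PERF_PID"                            # line 38: reached once kill -0 reports "No such process"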
00:09:15.249 19:39:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.249 19:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.249 19:39:56 -- common/autotest_common.sh@10 -- # set +x 00:09:15.506 19:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.506 19:39:56 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:15.506 19:39:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.506 19:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.506 19:39:56 -- common/autotest_common.sh@10 -- # set +x 00:09:15.764 19:39:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.764 19:39:57 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:15.764 19:39:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:15.764 19:39:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.764 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:09:16.020 19:39:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.020 19:39:57 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:16.020 19:39:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:16.020 19:39:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.020 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:09:16.583 19:39:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.583 19:39:57 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:16.583 19:39:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:16.583 19:39:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.583 19:39:57 -- common/autotest_common.sh@10 -- # set +x 00:09:16.843 19:39:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.843 19:39:58 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:16.843 19:39:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:16.843 19:39:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.843 19:39:58 -- common/autotest_common.sh@10 -- # set +x 00:09:17.131 19:39:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:17.131 19:39:58 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:17.131 19:39:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:17.131 19:39:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:17.131 19:39:58 -- common/autotest_common.sh@10 -- # set +x 00:09:17.394 19:39:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:17.394 19:39:58 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:17.394 19:39:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:17.394 19:39:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:17.394 19:39:58 -- common/autotest_common.sh@10 -- # set +x 00:09:17.652 19:39:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:17.652 19:39:59 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:17.652 19:39:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:17.652 19:39:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:17.652 19:39:59 -- common/autotest_common.sh@10 -- # set +x 00:09:17.910 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:18.169 19:39:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.169 19:39:59 -- target/connect_stress.sh@34 -- # kill -0 1634234 00:09:18.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1634234) - No such process 00:09:18.169 19:39:59 -- target/connect_stress.sh@38 -- # wait 1634234 00:09:18.169 19:39:59 -- target/connect_stress.sh@39 -- # rm 
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:18.169 19:39:59 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:18.169 19:39:59 -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:18.169 19:39:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:18.169 19:39:59 -- nvmf/common.sh@117 -- # sync 00:09:18.169 19:39:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.169 19:39:59 -- nvmf/common.sh@120 -- # set +e 00:09:18.169 19:39:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.169 19:39:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.169 rmmod nvme_tcp 00:09:18.169 rmmod nvme_fabrics 00:09:18.169 rmmod nvme_keyring 00:09:18.169 19:39:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.169 19:39:59 -- nvmf/common.sh@124 -- # set -e 00:09:18.169 19:39:59 -- nvmf/common.sh@125 -- # return 0 00:09:18.169 19:39:59 -- nvmf/common.sh@478 -- # '[' -n 1634080 ']' 00:09:18.169 19:39:59 -- nvmf/common.sh@479 -- # killprocess 1634080 00:09:18.169 19:39:59 -- common/autotest_common.sh@936 -- # '[' -z 1634080 ']' 00:09:18.169 19:39:59 -- common/autotest_common.sh@940 -- # kill -0 1634080 00:09:18.169 19:39:59 -- common/autotest_common.sh@941 -- # uname 00:09:18.169 19:39:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:18.169 19:39:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1634080 00:09:18.169 19:39:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:18.169 19:39:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:18.169 19:39:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1634080' 00:09:18.169 killing process with pid 1634080 00:09:18.169 19:39:59 -- common/autotest_common.sh@955 -- # kill 1634080 00:09:18.169 19:39:59 -- common/autotest_common.sh@960 -- # wait 1634080 00:09:18.428 19:39:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:18.428 19:39:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:18.428 19:39:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:18.428 19:39:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.428 19:39:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.428 19:39:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.428 19:39:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.428 19:39:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.332 19:40:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:20.332 00:09:20.332 real 0m16.051s 00:09:20.332 user 0m40.195s 00:09:20.332 sys 0m6.210s 00:09:20.332 19:40:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:20.332 19:40:01 -- common/autotest_common.sh@10 -- # set +x 00:09:20.332 ************************************ 00:09:20.332 END TEST nvmf_connect_stress 00:09:20.332 ************************************ 00:09:20.591 19:40:01 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:20.591 19:40:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:20.591 19:40:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.591 19:40:01 -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 ************************************ 00:09:20.591 START TEST nvmf_fused_ordering 00:09:20.591 ************************************ 00:09:20.591 19:40:01 -- common/autotest_common.sh@1111 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:20.591 * Looking for test storage... 00:09:20.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.591 19:40:02 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.591 19:40:02 -- nvmf/common.sh@7 -- # uname -s 00:09:20.591 19:40:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.591 19:40:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.591 19:40:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.591 19:40:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.591 19:40:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.591 19:40:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.591 19:40:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.591 19:40:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.591 19:40:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.591 19:40:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.591 19:40:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:20.591 19:40:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:20.591 19:40:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.591 19:40:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.591 19:40:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.591 19:40:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.591 19:40:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.591 19:40:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.591 19:40:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.591 19:40:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.591 19:40:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.591 19:40:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.591 19:40:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.591 19:40:02 -- paths/export.sh@5 -- # export PATH 00:09:20.591 19:40:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.591 19:40:02 -- nvmf/common.sh@47 -- # : 0 00:09:20.591 19:40:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.591 19:40:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.591 19:40:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.591 19:40:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.591 19:40:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.591 19:40:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.591 19:40:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.591 19:40:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.591 19:40:02 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:20.591 19:40:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:20.591 19:40:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.591 19:40:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:20.591 19:40:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:20.591 19:40:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:20.592 19:40:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.592 19:40:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.592 19:40:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.592 19:40:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:20.592 19:40:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:20.592 19:40:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:20.592 19:40:02 -- common/autotest_common.sh@10 -- # set +x 00:09:23.125 19:40:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:23.125 19:40:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:23.125 19:40:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:23.125 19:40:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:23.125 19:40:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:23.125 19:40:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:23.125 19:40:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:23.125 19:40:04 -- nvmf/common.sh@295 -- # net_devs=() 00:09:23.125 19:40:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:23.125 19:40:04 -- nvmf/common.sh@296 -- # e810=() 00:09:23.125 19:40:04 -- nvmf/common.sh@296 -- # local -ga e810 00:09:23.125 19:40:04 -- nvmf/common.sh@297 -- # x722=() 
00:09:23.125 19:40:04 -- nvmf/common.sh@297 -- # local -ga x722 00:09:23.125 19:40:04 -- nvmf/common.sh@298 -- # mlx=() 00:09:23.125 19:40:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:23.125 19:40:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.125 19:40:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.125 19:40:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.125 19:40:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.125 19:40:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.125 19:40:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.125 19:40:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.125 19:40:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.125 19:40:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.125 19:40:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.125 19:40:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.125 19:40:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:23.125 19:40:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:23.125 19:40:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:23.125 19:40:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.125 19:40:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:23.125 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:23.125 19:40:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.125 19:40:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:23.125 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:23.125 19:40:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:23.125 19:40:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.125 19:40:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.125 19:40:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:23.125 19:40:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.125 19:40:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:23.125 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:23.125 19:40:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:09:23.125 19:40:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.125 19:40:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.125 19:40:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:23.125 19:40:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.125 19:40:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:23.125 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:23.125 19:40:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.125 19:40:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:23.125 19:40:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:23.125 19:40:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:23.125 19:40:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:23.125 19:40:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.125 19:40:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.125 19:40:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.125 19:40:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:23.125 19:40:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.125 19:40:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.125 19:40:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:23.125 19:40:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.125 19:40:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.125 19:40:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:23.125 19:40:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:23.125 19:40:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.125 19:40:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.125 19:40:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.126 19:40:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.126 19:40:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:23.126 19:40:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.126 19:40:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.126 19:40:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.126 19:40:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:23.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:09:23.126 00:09:23.126 --- 10.0.0.2 ping statistics --- 00:09:23.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.126 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:09:23.126 19:40:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:09:23.126 00:09:23.126 --- 10.0.0.1 ping statistics --- 00:09:23.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.126 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:09:23.126 19:40:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.126 19:40:04 -- nvmf/common.sh@411 -- # return 0 00:09:23.126 19:40:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:23.126 19:40:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.126 19:40:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:23.126 19:40:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:23.126 19:40:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.126 19:40:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:23.126 19:40:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:23.126 19:40:04 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:23.126 19:40:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:23.126 19:40:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:23.126 19:40:04 -- common/autotest_common.sh@10 -- # set +x 00:09:23.126 19:40:04 -- nvmf/common.sh@470 -- # nvmfpid=1637402 00:09:23.126 19:40:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:23.126 19:40:04 -- nvmf/common.sh@471 -- # waitforlisten 1637402 00:09:23.126 19:40:04 -- common/autotest_common.sh@817 -- # '[' -z 1637402 ']' 00:09:23.126 19:40:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.126 19:40:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:23.126 19:40:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.126 19:40:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:23.126 19:40:04 -- common/autotest_common.sh@10 -- # set +x 00:09:23.126 [2024-04-24 19:40:04.369486] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:09:23.126 [2024-04-24 19:40:04.369588] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.126 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.126 [2024-04-24 19:40:04.439011] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.126 [2024-04-24 19:40:04.558217] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.126 [2024-04-24 19:40:04.558294] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.126 [2024-04-24 19:40:04.558311] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.126 [2024-04-24 19:40:04.558324] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.126 [2024-04-24 19:40:04.558336] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
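As in the previous test, nvmfappstart boots the target inside the namespace; the only change for fused_ordering is the core mask (-m 0x2, one reactor, matching the single reactor start-up notice that follows). A condensed sketch of the launch recorded above, with waitforlisten's behavior inferred from its own "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock" message:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!                  # 1637402 in this run
  waitforlisten "$nvmfpid"    # blocks until the app is up and /var/tmp/spdk.sock accepts RPCs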
00:09:23.126 [2024-04-24 19:40:04.558381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.060 19:40:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:24.060 19:40:05 -- common/autotest_common.sh@850 -- # return 0 00:09:24.060 19:40:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:24.060 19:40:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:24.060 19:40:05 -- common/autotest_common.sh@10 -- # set +x 00:09:24.060 19:40:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.060 19:40:05 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:24.060 19:40:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.060 19:40:05 -- common/autotest_common.sh@10 -- # set +x 00:09:24.060 [2024-04-24 19:40:05.367806] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.060 19:40:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.060 19:40:05 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:24.060 19:40:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.060 19:40:05 -- common/autotest_common.sh@10 -- # set +x 00:09:24.060 19:40:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.060 19:40:05 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.060 19:40:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.060 19:40:05 -- common/autotest_common.sh@10 -- # set +x 00:09:24.060 [2024-04-24 19:40:05.384017] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.060 19:40:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.060 19:40:05 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:24.060 19:40:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.060 19:40:05 -- common/autotest_common.sh@10 -- # set +x 00:09:24.060 NULL1 00:09:24.060 19:40:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.060 19:40:05 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:24.060 19:40:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.060 19:40:05 -- common/autotest_common.sh@10 -- # set +x 00:09:24.060 19:40:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.061 19:40:05 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:24.061 19:40:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:24.061 19:40:05 -- common/autotest_common.sh@10 -- # set +x 00:09:24.061 19:40:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:24.061 19:40:05 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:24.061 [2024-04-24 19:40:05.431467] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
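With the target app up, the rpc_cmd calls traced above are the entire bring-up for this test: a TCP transport, one subsystem capped at ten namespaces, a listener on the namespaced address, and a 1000 MiB null bdev attached as namespace 1 for the fused_ordering initiator to drive. rpc_cmd in the harness is essentially a wrapper over scripts/rpc.py, so a hand-run equivalent from the SPDK repo root would look roughly like this (a sketch, not the test script itself):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport with the harness's -o/-u options
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512            # 1000 MiB, 512 B blocks -> "Namespace ID: 1 size: 1GB"
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

fused_ordering connects over TCP and loops fused command pairs (NVMe's fused compare-and-write) at the target; each fused_ordering(N) line in the output that follows marks one completed iteration, so a clean run counts from 0 up to 1023.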
00:09:24.061 [2024-04-24 19:40:05.431510] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637551 ] 00:09:24.061 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.626 Attached to nqn.2016-06.io.spdk:cnode1 00:09:24.626 Namespace ID: 1 size: 1GB 00:09:24.626 fused_ordering(0) 00:09:24.626 fused_ordering(1) 00:09:24.626 [per-iteration lines fused_ordering(2) through fused_ordering(956), near-identical and spanning timestamps 00:09:24.626 to 00:09:27.640, condensed] 00:09:27.640
fused_ordering(957) 00:09:27.640 fused_ordering(958) 00:09:27.640 fused_ordering(959) 00:09:27.640 fused_ordering(960) 00:09:27.640 fused_ordering(961) 00:09:27.640 fused_ordering(962) 00:09:27.640 fused_ordering(963) 00:09:27.640 fused_ordering(964) 00:09:27.640 fused_ordering(965) 00:09:27.640 fused_ordering(966) 00:09:27.640 fused_ordering(967) 00:09:27.640 fused_ordering(968) 00:09:27.640 fused_ordering(969) 00:09:27.640 fused_ordering(970) 00:09:27.640 fused_ordering(971) 00:09:27.640 fused_ordering(972) 00:09:27.640 fused_ordering(973) 00:09:27.640 fused_ordering(974) 00:09:27.640 fused_ordering(975) 00:09:27.640 fused_ordering(976) 00:09:27.640 fused_ordering(977) 00:09:27.640 fused_ordering(978) 00:09:27.640 fused_ordering(979) 00:09:27.640 fused_ordering(980) 00:09:27.640 fused_ordering(981) 00:09:27.640 fused_ordering(982) 00:09:27.640 fused_ordering(983) 00:09:27.640 fused_ordering(984) 00:09:27.640 fused_ordering(985) 00:09:27.640 fused_ordering(986) 00:09:27.640 fused_ordering(987) 00:09:27.640 fused_ordering(988) 00:09:27.640 fused_ordering(989) 00:09:27.640 fused_ordering(990) 00:09:27.640 fused_ordering(991) 00:09:27.640 fused_ordering(992) 00:09:27.640 fused_ordering(993) 00:09:27.640 fused_ordering(994) 00:09:27.640 fused_ordering(995) 00:09:27.640 fused_ordering(996) 00:09:27.640 fused_ordering(997) 00:09:27.640 fused_ordering(998) 00:09:27.640 fused_ordering(999) 00:09:27.640 fused_ordering(1000) 00:09:27.640 fused_ordering(1001) 00:09:27.640 fused_ordering(1002) 00:09:27.640 fused_ordering(1003) 00:09:27.640 fused_ordering(1004) 00:09:27.640 fused_ordering(1005) 00:09:27.640 fused_ordering(1006) 00:09:27.640 fused_ordering(1007) 00:09:27.640 fused_ordering(1008) 00:09:27.640 fused_ordering(1009) 00:09:27.640 fused_ordering(1010) 00:09:27.640 fused_ordering(1011) 00:09:27.640 fused_ordering(1012) 00:09:27.640 fused_ordering(1013) 00:09:27.640 fused_ordering(1014) 00:09:27.640 fused_ordering(1015) 00:09:27.640 fused_ordering(1016) 00:09:27.640 fused_ordering(1017) 00:09:27.640 fused_ordering(1018) 00:09:27.640 fused_ordering(1019) 00:09:27.640 fused_ordering(1020) 00:09:27.640 fused_ordering(1021) 00:09:27.640 fused_ordering(1022) 00:09:27.640 fused_ordering(1023) 00:09:27.899 19:40:09 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:27.899 19:40:09 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:27.899 19:40:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:27.899 19:40:09 -- nvmf/common.sh@117 -- # sync 00:09:27.899 19:40:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:27.899 19:40:09 -- nvmf/common.sh@120 -- # set +e 00:09:27.899 19:40:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:27.899 19:40:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:27.899 rmmod nvme_tcp 00:09:27.899 rmmod nvme_fabrics 00:09:27.899 rmmod nvme_keyring 00:09:27.899 19:40:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:27.899 19:40:09 -- nvmf/common.sh@124 -- # set -e 00:09:27.899 19:40:09 -- nvmf/common.sh@125 -- # return 0 00:09:27.899 19:40:09 -- nvmf/common.sh@478 -- # '[' -n 1637402 ']' 00:09:27.899 19:40:09 -- nvmf/common.sh@479 -- # killprocess 1637402 00:09:27.899 19:40:09 -- common/autotest_common.sh@936 -- # '[' -z 1637402 ']' 00:09:27.899 19:40:09 -- common/autotest_common.sh@940 -- # kill -0 1637402 00:09:27.899 19:40:09 -- common/autotest_common.sh@941 -- # uname 00:09:27.899 19:40:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:27.899 19:40:09 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 1637402 00:09:27.899 19:40:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:27.899 19:40:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:27.899 19:40:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1637402' 00:09:27.899 killing process with pid 1637402 00:09:27.899 19:40:09 -- common/autotest_common.sh@955 -- # kill 1637402 00:09:27.899 19:40:09 -- common/autotest_common.sh@960 -- # wait 1637402 00:09:28.159 19:40:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:28.159 19:40:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:28.159 19:40:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:28.159 19:40:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:28.159 19:40:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:28.159 19:40:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.159 19:40:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:28.159 19:40:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.068 19:40:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:30.068 00:09:30.068 real 0m9.572s 00:09:30.068 user 0m7.345s 00:09:30.068 sys 0m4.407s 00:09:30.068 19:40:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:30.068 19:40:11 -- common/autotest_common.sh@10 -- # set +x 00:09:30.068 ************************************ 00:09:30.068 END TEST nvmf_fused_ordering 00:09:30.068 ************************************ 00:09:30.068 19:40:11 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:30.068 19:40:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:30.068 19:40:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:30.068 19:40:11 -- common/autotest_common.sh@10 -- # set +x 00:09:30.328 ************************************ 00:09:30.328 START TEST nvmf_delete_subsystem 00:09:30.328 ************************************ 00:09:30.328 19:40:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:30.328 * Looking for test storage... 
00:09:30.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.328 19:40:11 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.328 19:40:11 -- nvmf/common.sh@7 -- # uname -s 00:09:30.328 19:40:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.328 19:40:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.328 19:40:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.328 19:40:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.328 19:40:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.328 19:40:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.328 19:40:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.328 19:40:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.328 19:40:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.328 19:40:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.328 19:40:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:30.328 19:40:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:30.328 19:40:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.328 19:40:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.328 19:40:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.328 19:40:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.328 19:40:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.328 19:40:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.328 19:40:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.328 19:40:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.328 19:40:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go toolchain prefixes repeated, condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.328 19:40:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[repeated toolchain prefixes condensed]:/var/lib/snapd/snap/bin 00:09:30.328 19:40:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[repeated toolchain prefixes condensed]:/var/lib/snapd/snap/bin 00:09:30.328 19:40:11 -- paths/export.sh@5 -- # export PATH 00:09:30.328 19:40:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[repeated toolchain prefixes condensed]:/var/lib/snapd/snap/bin 00:09:30.328 19:40:11 -- nvmf/common.sh@47 -- # : 0 00:09:30.328 19:40:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:30.328 19:40:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:30.328 19:40:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.328 19:40:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.328 19:40:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.328 19:40:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:30.328 19:40:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:30.328 19:40:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:30.328 19:40:11 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:30.328 19:40:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:30.328 19:40:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.328 19:40:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:30.328 19:40:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:30.328 19:40:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:30.328 19:40:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.328 19:40:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.328 19:40:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.328 19:40:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:30.328 19:40:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:30.328 19:40:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:30.328 19:40:11 -- common/autotest_common.sh@10 -- # set +x
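gather_supported_nvmf_pci_devs, entered just above with xtrace muted, produces the 'Found 0000:0a:00.x' lines that follow: it fills per-family arrays (e810, x722, mlx) keyed by PCI vendor:device IDs, 0x8086:0x1592 and 0x8086:0x159b being the E810 parts, then resolves each matching function to its kernel netdev through sysfs. A rough standalone equivalent of that discovery step, assuming pciutils is available:

# List E810 functions (here 8086:159b) and the net device bound to each, the same
# /sys/bus/pci/devices/$pci/net lookup the traced shell code performs.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    netdev=$(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)
    echo "Found $pci -> ${netdev:-no netdev bound}"
done

On this rig that yields the two functions 0000:0a:00.0 and 0000:0a:00.1, bound to cvl_0_0 and cvl_0_1 respectively.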
00:09:32.234 19:40:13 -- nvmf/common.sh@297 -- # local -ga x722 00:09:32.234 19:40:13 -- nvmf/common.sh@298 -- # mlx=() 00:09:32.234 19:40:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:32.234 19:40:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.234 19:40:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.234 19:40:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.234 19:40:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.234 19:40:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.234 19:40:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.234 19:40:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.234 19:40:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.234 19:40:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.234 19:40:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.234 19:40:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.234 19:40:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:32.234 19:40:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:32.234 19:40:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:32.234 19:40:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.234 19:40:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:32.234 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:32.234 19:40:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:32.234 19:40:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:32.234 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:32.234 19:40:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:32.234 19:40:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.234 19:40:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.234 19:40:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:32.234 19:40:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.234 19:40:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:32.234 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:32.234 19:40:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:09:32.234 19:40:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:32.234 19:40:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.234 19:40:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:32.234 19:40:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.234 19:40:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:32.234 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:32.234 19:40:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.234 19:40:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:32.234 19:40:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:32.234 19:40:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:32.234 19:40:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:32.234 19:40:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.234 19:40:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.234 19:40:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.234 19:40:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:32.234 19:40:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.234 19:40:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.234 19:40:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:32.234 19:40:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.234 19:40:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.234 19:40:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:32.234 19:40:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:32.235 19:40:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.235 19:40:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.501 19:40:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.501 19:40:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.501 19:40:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:32.501 19:40:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.501 19:40:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.501 19:40:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.501 19:40:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:32.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:09:32.501 00:09:32.501 --- 10.0.0.2 ping statistics --- 00:09:32.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.501 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:09:32.501 19:40:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:32.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:09:32.501 00:09:32.501 --- 10.0.0.1 ping statistics --- 00:09:32.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.501 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:09:32.501 19:40:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.501 19:40:13 -- nvmf/common.sh@411 -- # return 0 00:09:32.501 19:40:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:32.501 19:40:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.501 19:40:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:32.501 19:40:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:32.501 19:40:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.501 19:40:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:32.501 19:40:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:32.501 19:40:13 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:32.501 19:40:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:32.501 19:40:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:32.501 19:40:13 -- common/autotest_common.sh@10 -- # set +x 00:09:32.501 19:40:13 -- nvmf/common.sh@470 -- # nvmfpid=1639950 00:09:32.501 19:40:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:32.501 19:40:13 -- nvmf/common.sh@471 -- # waitforlisten 1639950 00:09:32.501 19:40:13 -- common/autotest_common.sh@817 -- # '[' -z 1639950 ']' 00:09:32.501 19:40:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.501 19:40:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:32.501 19:40:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.501 19:40:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:32.501 19:40:13 -- common/autotest_common.sh@10 -- # set +x 00:09:32.501 [2024-04-24 19:40:13.933901] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:09:32.501 [2024-04-24 19:40:13.933991] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.501 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.501 [2024-04-24 19:40:14.007901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:32.788 [2024-04-24 19:40:14.127462] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.788 [2024-04-24 19:40:14.127534] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.788 [2024-04-24 19:40:14.127550] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.788 [2024-04-24 19:40:14.127563] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.788 [2024-04-24 19:40:14.127574] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
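The test being prepared here exists to delete a subsystem while I/O against it is still outstanding. As the trace below shows, the harness layers a delay bdev (Delay0) on top of NULL1 with one-second latencies in every category (bdev_delay_create takes microseconds), starts a five-second spdk_nvme_perf run at queue depth 128, sleeps two seconds, then calls nvmf_delete_subsystem so a full queue of delayed commands is guaranteed to be in flight when the subsystem disappears. A hand-run sketch of the same sequence, arguments copied from the trace:

scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Five seconds of 70/30 randrw 512 B I/O at QD 128 on cores 2-3 (-c 0xC), in the background...
build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
# ...then pull the subsystem out from under it while those commands sit in the delay bdev.
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The flood of 'Read/Write completed with error (sct=0, sc=8)' lines perf prints afterwards is the expected outcome rather than a failure: deleting the subsystem tears down the queue pairs, generic status code 8 is Command Aborted due to SQ Deletion, and 'starting I/O failed: -6' (-ENXIO) is perf unable to queue new submissions on the dying qpair.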
00:09:32.788 [2024-04-24 19:40:14.127668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.788 [2024-04-24 19:40:14.127675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.727 19:40:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:33.727 19:40:14 -- common/autotest_common.sh@850 -- # return 0 00:09:33.727 19:40:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:33.727 19:40:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:33.727 19:40:14 -- common/autotest_common.sh@10 -- # set +x 00:09:33.727 19:40:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.727 19:40:14 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.727 19:40:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.727 19:40:14 -- common/autotest_common.sh@10 -- # set +x 00:09:33.727 [2024-04-24 19:40:14.913443] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.727 19:40:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.727 19:40:14 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:33.727 19:40:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.727 19:40:14 -- common/autotest_common.sh@10 -- # set +x 00:09:33.727 19:40:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.727 19:40:14 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.727 19:40:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.727 19:40:14 -- common/autotest_common.sh@10 -- # set +x 00:09:33.727 [2024-04-24 19:40:14.929730] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.727 19:40:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.727 19:40:14 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:33.727 19:40:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.727 19:40:14 -- common/autotest_common.sh@10 -- # set +x 00:09:33.727 NULL1 00:09:33.727 19:40:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.727 19:40:14 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:33.727 19:40:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.727 19:40:14 -- common/autotest_common.sh@10 -- # set +x 00:09:33.727 Delay0 00:09:33.727 19:40:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.727 19:40:14 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.727 19:40:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.727 19:40:14 -- common/autotest_common.sh@10 -- # set +x 00:09:33.727 19:40:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.727 19:40:14 -- target/delete_subsystem.sh@28 -- # perf_pid=1640062 00:09:33.727 19:40:14 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:33.727 19:40:14 -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:33.727 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.727 [2024-04-24 19:40:15.004452] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:35.636 19:40:16 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.636 19:40:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:35.637 19:40:16 -- common/autotest_common.sh@10 -- # set +x 00:09:35.637 Read completed with error (sct=0, sc=8) 00:09:35.637 Read completed with error (sct=0, sc=8) 00:09:35.637 Read completed with error (sct=0, sc=8) 00:09:35.637 starting I/O failed: -6 00:09:35.637 Write completed with error (sct=0, sc=8) [... many repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries omitted ...] 00:09:35.637 [2024-04-24 19:40:17.137198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc258000c00 is same with the state(5) to be set [... repeated completion-with-error entries omitted ...] 00:09:37.020 [2024-04-24 19:40:18.101604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xceb120 is same with the state(5) to be set [... repeated completion-with-error entries omitted ...] 00:09:37.020 [2024-04-24 19:40:18.138151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc25800c510 is same with the state(5) to be set [... repeated completion-with-error entries omitted ...] 00:09:37.020 [2024-04-24 19:40:18.139135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccca10 is same with the state(5) to be set [... repeated completion-with-error entries omitted ...] 00:09:37.020 [2024-04-24 19:40:18.139339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc25800bf90 is same with the state(5) to be set [... repeated completion-with-error entries omitted ...] 00:09:37.020 [2024-04-24 19:40:18.139580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcccd30 is same with the state(5) to be set 00:09:37.020 [2024-04-24 19:40:18.140514] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xceb120 (9): Bad file descriptor 00:09:37.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:37.021 19:40:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.021 19:40:18 -- target/delete_subsystem.sh@34 -- # delay=0 00:09:37.021 19:40:18 -- target/delete_subsystem.sh@35 -- # kill -0 1640062 00:09:37.021 19:40:18 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:37.021 Initializing NVMe Controllers 00:09:37.021 Attached to NVMe over Fabrics controller at
10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:37.021 Controller IO queue size 128, less than required. 00:09:37.021 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:37.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:37.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:37.021 Initialization complete. Launching workers. 00:09:37.021 ======================================================== 00:09:37.021 Latency(us) 00:09:37.021 Device Information : IOPS MiB/s Average min max 00:09:37.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 182.01 0.09 918509.65 832.90 1014042.17 00:09:37.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.14 0.08 904742.00 708.90 1014309.72 00:09:37.021 ======================================================== 00:09:37.021 Total : 348.15 0.17 911939.62 708.90 1014309.72 00:09:37.021 00:09:37.280 19:40:18 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:37.280 19:40:18 -- target/delete_subsystem.sh@35 -- # kill -0 1640062 00:09:37.281 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1640062) - No such process 00:09:37.281 19:40:18 -- target/delete_subsystem.sh@45 -- # NOT wait 1640062 00:09:37.281 19:40:18 -- common/autotest_common.sh@638 -- # local es=0 00:09:37.281 19:40:18 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 1640062 00:09:37.281 19:40:18 -- common/autotest_common.sh@626 -- # local arg=wait 00:09:37.281 19:40:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:37.281 19:40:18 -- common/autotest_common.sh@630 -- # type -t wait 00:09:37.281 19:40:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:37.281 19:40:18 -- common/autotest_common.sh@641 -- # wait 1640062 00:09:37.281 19:40:18 -- common/autotest_common.sh@641 -- # es=1 00:09:37.281 19:40:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:37.281 19:40:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:37.281 19:40:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:37.281 19:40:18 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:37.281 19:40:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.281 19:40:18 -- common/autotest_common.sh@10 -- # set +x 00:09:37.281 19:40:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.281 19:40:18 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.281 19:40:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.281 19:40:18 -- common/autotest_common.sh@10 -- # set +x 00:09:37.281 [2024-04-24 19:40:18.664551] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.281 19:40:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.281 19:40:18 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.281 19:40:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.281 19:40:18 -- common/autotest_common.sh@10 -- # set +x 00:09:37.281 19:40:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.281 19:40:18 -- target/delete_subsystem.sh@54 -- # perf_pid=1640580 00:09:37.281 19:40:18 -- 
target/delete_subsystem.sh@56 -- # delay=0 00:09:37.281 19:40:18 -- target/delete_subsystem.sh@57 -- # kill -0 1640580 00:09:37.281 19:40:18 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:37.281 19:40:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:37.281 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.281 [2024-04-24 19:40:18.727305] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:37.868 19:40:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:37.868 19:40:19 -- target/delete_subsystem.sh@57 -- # kill -0 1640580 00:09:37.868 19:40:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:38.441 19:40:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:38.441 19:40:19 -- target/delete_subsystem.sh@57 -- # kill -0 1640580 00:09:38.441 19:40:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:38.700 19:40:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:38.700 19:40:20 -- target/delete_subsystem.sh@57 -- # kill -0 1640580 00:09:38.700 19:40:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:39.270 19:40:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:39.270 19:40:20 -- target/delete_subsystem.sh@57 -- # kill -0 1640580 00:09:39.270 19:40:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:39.837 19:40:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:39.837 19:40:21 -- target/delete_subsystem.sh@57 -- # kill -0 1640580 00:09:39.837 19:40:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:40.405 19:40:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:40.405 19:40:21 -- target/delete_subsystem.sh@57 -- # kill -0 1640580 00:09:40.405 19:40:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:40.405 Initializing NVMe Controllers 00:09:40.405 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:40.405 Controller IO queue size 128, less than required. 00:09:40.405 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:40.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:40.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:40.405 Initialization complete. Launching workers. 
00:09:40.405 ======================================================== 00:09:40.405 Latency(us) 00:09:40.405 Device Information : IOPS MiB/s Average min max 00:09:40.405 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004955.36 1000235.81 1043518.51 00:09:40.405 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004604.94 1000283.34 1042595.37 00:09:40.405 ======================================================== 00:09:40.405 Total : 256.00 0.12 1004780.15 1000235.81 1043518.51 00:09:40.405 00:09:40.973 19:40:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:40.973 19:40:22 -- target/delete_subsystem.sh@57 -- # kill -0 1640580 00:09:40.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1640580) - No such process 00:09:40.973 19:40:22 -- target/delete_subsystem.sh@67 -- # wait 1640580 00:09:40.974 19:40:22 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:40.974 19:40:22 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:40.974 19:40:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:40.974 19:40:22 -- nvmf/common.sh@117 -- # sync 00:09:40.974 19:40:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:40.974 19:40:22 -- nvmf/common.sh@120 -- # set +e 00:09:40.974 19:40:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:40.974 19:40:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:40.974 rmmod nvme_tcp 00:09:40.974 rmmod nvme_fabrics 00:09:40.974 rmmod nvme_keyring 00:09:40.974 19:40:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:40.974 19:40:22 -- nvmf/common.sh@124 -- # set -e 00:09:40.974 19:40:22 -- nvmf/common.sh@125 -- # return 0 00:09:40.974 19:40:22 -- nvmf/common.sh@478 -- # '[' -n 1639950 ']' 00:09:40.974 19:40:22 -- nvmf/common.sh@479 -- # killprocess 1639950 00:09:40.974 19:40:22 -- common/autotest_common.sh@936 -- # '[' -z 1639950 ']' 00:09:40.974 19:40:22 -- common/autotest_common.sh@940 -- # kill -0 1639950 00:09:40.974 19:40:22 -- common/autotest_common.sh@941 -- # uname 00:09:40.974 19:40:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:40.974 19:40:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1639950 00:09:40.974 19:40:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:40.974 19:40:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:40.974 19:40:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1639950' 00:09:40.974 killing process with pid 1639950 00:09:40.974 19:40:22 -- common/autotest_common.sh@955 -- # kill 1639950 00:09:40.974 19:40:22 -- common/autotest_common.sh@960 -- # wait 1639950 00:09:41.233 19:40:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:41.233 19:40:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:41.233 19:40:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:41.233 19:40:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:41.233 19:40:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:41.233 19:40:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.233 19:40:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.233 19:40:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.136 19:40:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:43.136 00:09:43.136 real 0m12.932s 00:09:43.136 user 0m29.170s 00:09:43.136 sys 0m3.026s 
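Stripped of the xtrace noise, the delete_subsystem test above is a short RPC sequence: provision a subsystem whose namespace sits on a deliberately slow delay bdev, start spdk_nvme_perf against it, and delete the subsystem while that I/O is still in flight. A condensed sketch with the arguments copied from the log, where rpc.py stands for scripts/rpc.py talking to the target's /var/tmp/spdk.sock:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512       # null backing bdev, 512 B blocks
  # delay bdev wraps NULL1 and injects ~1 s of latency per I/O (values in us)
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                 -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete with I/O in flight

The delay bdev guarantees a full queue when the delete lands, which is why the log is dominated by completions with error (sct=0, sc=8). The harness then polls the perf process with the kill -0 "$perf_pid" / sleep 0.5 loop seen above, giving up after roughly 20 iterations; the "No such process" message from kill is the expected sign that perf has already exited.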
00:09:43.137 19:40:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:43.137 19:40:24 -- common/autotest_common.sh@10 -- # set +x 00:09:43.137 ************************************ 00:09:43.137 END TEST nvmf_delete_subsystem 00:09:43.137 ************************************ 00:09:43.137 19:40:24 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:43.137 19:40:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:43.137 19:40:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:43.137 19:40:24 -- common/autotest_common.sh@10 -- # set +x 00:09:43.396 ************************************ 00:09:43.396 START TEST nvmf_ns_masking 00:09:43.396 ************************************ 00:09:43.396 19:40:24 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:43.396 * Looking for test storage... 00:09:43.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.396 19:40:24 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.396 19:40:24 -- nvmf/common.sh@7 -- # uname -s 00:09:43.396 19:40:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.396 19:40:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.396 19:40:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.396 19:40:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.396 19:40:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.396 19:40:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.396 19:40:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.396 19:40:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.396 19:40:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.396 19:40:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.396 19:40:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.396 19:40:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:43.396 19:40:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.396 19:40:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.396 19:40:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.396 19:40:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.396 19:40:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.396 19:40:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.396 19:40:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.396 19:40:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.396 19:40:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.396 19:40:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.396 19:40:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.396 19:40:24 -- paths/export.sh@5 -- # export PATH 00:09:43.396 19:40:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.396 19:40:24 -- nvmf/common.sh@47 -- # : 0 00:09:43.396 19:40:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:43.396 19:40:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:43.396 19:40:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.396 19:40:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.396 19:40:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.396 19:40:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:43.396 19:40:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:43.396 19:40:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:43.396 19:40:24 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.396 19:40:24 -- target/ns_masking.sh@11 -- # loops=5 00:09:43.396 19:40:24 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:43.396 19:40:24 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:09:43.396 19:40:24 -- target/ns_masking.sh@15 -- # uuidgen 00:09:43.396 19:40:24 -- target/ns_masking.sh@15 -- # HOSTID=9cf4bbe6-0552-4dc8-bdaa-da86c7fd93ce 00:09:43.396 19:40:24 -- target/ns_masking.sh@44 -- # nvmftestinit 00:09:43.396 19:40:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:43.396 19:40:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.396 19:40:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:43.396 19:40:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:43.396 19:40:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:43.396 19:40:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.396 19:40:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.396 19:40:24 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:09:43.396 19:40:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:43.396 19:40:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:43.396 19:40:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:43.396 19:40:24 -- common/autotest_common.sh@10 -- # set +x 00:09:45.304 19:40:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:45.304 19:40:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:45.304 19:40:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:45.304 19:40:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:45.304 19:40:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:45.304 19:40:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:45.304 19:40:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:45.304 19:40:26 -- nvmf/common.sh@295 -- # net_devs=() 00:09:45.304 19:40:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:45.304 19:40:26 -- nvmf/common.sh@296 -- # e810=() 00:09:45.304 19:40:26 -- nvmf/common.sh@296 -- # local -ga e810 00:09:45.304 19:40:26 -- nvmf/common.sh@297 -- # x722=() 00:09:45.304 19:40:26 -- nvmf/common.sh@297 -- # local -ga x722 00:09:45.304 19:40:26 -- nvmf/common.sh@298 -- # mlx=() 00:09:45.304 19:40:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:45.304 19:40:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.304 19:40:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.304 19:40:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.304 19:40:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.304 19:40:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.304 19:40:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.304 19:40:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.304 19:40:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.304 19:40:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.304 19:40:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.304 19:40:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.304 19:40:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:45.304 19:40:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:45.304 19:40:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:45.304 19:40:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.304 19:40:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:45.304 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:45.304 19:40:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.304 19:40:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:45.304 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:45.304 19:40:26 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:45.304 19:40:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.304 19:40:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.304 19:40:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:45.304 19:40:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.304 19:40:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:45.304 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:45.304 19:40:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.304 19:40:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.304 19:40:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.304 19:40:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:45.304 19:40:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.304 19:40:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:45.304 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:45.304 19:40:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.304 19:40:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:45.304 19:40:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:45.304 19:40:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:45.304 19:40:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.304 19:40:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.304 19:40:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.304 19:40:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:45.304 19:40:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.304 19:40:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.304 19:40:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:45.304 19:40:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.304 19:40:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.304 19:40:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:45.304 19:40:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:45.304 19:40:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.304 19:40:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.304 19:40:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.304 19:40:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.304 19:40:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:45.304 19:40:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.304 19:40:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.304 19:40:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.304 19:40:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:45.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:09:45.304 00:09:45.304 --- 10.0.0.2 ping statistics --- 00:09:45.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.304 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:09:45.304 19:40:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:09:45.304 00:09:45.304 --- 10.0.0.1 ping statistics --- 00:09:45.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.304 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:09:45.304 19:40:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.304 19:40:26 -- nvmf/common.sh@411 -- # return 0 00:09:45.304 19:40:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:45.304 19:40:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.304 19:40:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:45.304 19:40:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.304 19:40:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:45.304 19:40:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:45.565 19:40:26 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:09:45.565 19:40:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:45.565 19:40:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:45.565 19:40:26 -- common/autotest_common.sh@10 -- # set +x 00:09:45.565 19:40:26 -- nvmf/common.sh@470 -- # nvmfpid=1642934 00:09:45.565 19:40:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:45.565 19:40:26 -- nvmf/common.sh@471 -- # waitforlisten 1642934 00:09:45.565 19:40:26 -- common/autotest_common.sh@817 -- # '[' -z 1642934 ']' 00:09:45.565 19:40:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.565 19:40:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:45.565 19:40:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.565 19:40:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:45.565 19:40:26 -- common/autotest_common.sh@10 -- # set +x 00:09:45.565 [2024-04-24 19:40:26.871882] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:09:45.565 [2024-04-24 19:40:26.871983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.565 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.565 [2024-04-24 19:40:26.941158] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.565 [2024-04-24 19:40:27.061543] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:45.565 [2024-04-24 19:40:27.061606] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.565 [2024-04-24 19:40:27.061622] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.565 [2024-04-24 19:40:27.061643] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.565 [2024-04-24 19:40:27.061656] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.565 [2024-04-24 19:40:27.061729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.565 [2024-04-24 19:40:27.061798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.565 [2024-04-24 19:40:27.061849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.565 [2024-04-24 19:40:27.061852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.503 19:40:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:46.503 19:40:27 -- common/autotest_common.sh@850 -- # return 0 00:09:46.503 19:40:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:46.503 19:40:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:46.503 19:40:27 -- common/autotest_common.sh@10 -- # set +x 00:09:46.503 19:40:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.503 19:40:27 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:46.761 [2024-04-24 19:40:28.071448] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.761 19:40:28 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:09:46.761 19:40:28 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:09:46.761 19:40:28 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:47.019 Malloc1 00:09:47.019 19:40:28 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:47.277 Malloc2 00:09:47.277 19:40:28 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:47.535 19:40:28 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:47.793 19:40:29 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.068 [2024-04-24 19:40:29.326518] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.068 19:40:29 -- target/ns_masking.sh@61 -- # connect 00:09:48.068 19:40:29 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9cf4bbe6-0552-4dc8-bdaa-da86c7fd93ce -a 10.0.0.2 -s 4420 -i 4 00:09:48.068 19:40:29 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:09:48.068 19:40:29 -- common/autotest_common.sh@1184 -- # local i=0 00:09:48.068 19:40:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:48.068 19:40:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
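The connect step above is deliberately explicit about identity: namespace masking decisions are keyed on the host NQN (and host ID) presented at connect time, and the ns_is_visible checks that follow simply look for the namespace in nvme list-ns output. A sketch of the probe pattern, assuming the controller comes up as /dev/nvme0, as the jq query against nvme list-subsys resolves in the log:

  # connect as a specific host identity (values copied from the log)
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
       -n nqn.2016-06.io.spdk:cnode1 \
       -q nqn.2016-06.io.spdk:host1 \
       -I 9cf4bbe6-0552-4dc8-bdaa-da86c7fd93ce
  # a namespace masked away from this host simply does not appear here
  nvme list-ns /dev/nvme0
  # confirm which backing namespace is mapped by reading the NGUID of NSID 1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

An all-zero NGUID, as in the NOT ns_is_visible branch further down, means the namespace is inactive for this host, which is what adding it with --no-auto-visible is meant to produce until the host is explicitly allowed.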
00:09:48.068 19:40:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:50.614 19:40:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:50.614 19:40:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:50.614 19:40:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:50.614 19:40:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:50.614 19:40:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:50.614 19:40:31 -- common/autotest_common.sh@1194 -- # return 0 00:09:50.614 19:40:31 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:50.614 19:40:31 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:50.614 19:40:31 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:50.614 19:40:31 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:50.614 19:40:31 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:09:50.614 19:40:31 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:50.614 19:40:31 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:50.614 [ 0]:0x1 00:09:50.614 19:40:31 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:50.614 19:40:31 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:50.614 19:40:31 -- target/ns_masking.sh@40 -- # nguid=389943fc653241118b6adc7391016333 00:09:50.614 19:40:31 -- target/ns_masking.sh@41 -- # [[ 389943fc653241118b6adc7391016333 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:50.614 19:40:31 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:50.614 19:40:31 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:09:50.614 19:40:31 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:50.614 19:40:31 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:50.614 [ 0]:0x1 00:09:50.614 19:40:31 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:50.614 19:40:31 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:50.614 19:40:32 -- target/ns_masking.sh@40 -- # nguid=389943fc653241118b6adc7391016333 00:09:50.614 19:40:32 -- target/ns_masking.sh@41 -- # [[ 389943fc653241118b6adc7391016333 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:50.614 19:40:32 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:09:50.614 19:40:32 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:50.614 19:40:32 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:50.614 [ 1]:0x2 00:09:50.614 19:40:32 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:50.614 19:40:32 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:50.614 19:40:32 -- target/ns_masking.sh@40 -- # nguid=a1a5c0d101724eb591e21fc88a72da99 00:09:50.614 19:40:32 -- target/ns_masking.sh@41 -- # [[ a1a5c0d101724eb591e21fc88a72da99 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:50.614 19:40:32 -- target/ns_masking.sh@69 -- # disconnect 00:09:50.614 19:40:32 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:50.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.614 19:40:32 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.871 19:40:32 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:51.130 19:40:32 -- target/ns_masking.sh@77 -- # connect 1 00:09:51.130 19:40:32 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9cf4bbe6-0552-4dc8-bdaa-da86c7fd93ce -a 10.0.0.2 -s 4420 -i 4 00:09:51.390 19:40:32 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:51.390 19:40:32 -- common/autotest_common.sh@1184 -- # local i=0 00:09:51.390 19:40:32 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:51.390 19:40:32 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:09:51.390 19:40:32 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:09:51.390 19:40:32 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:53.922 19:40:34 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:53.922 19:40:34 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:53.922 19:40:34 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:53.922 19:40:34 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:53.922 19:40:34 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:53.922 19:40:34 -- common/autotest_common.sh@1194 -- # return 0 00:09:53.922 19:40:34 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:53.922 19:40:34 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:53.922 19:40:34 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:53.922 19:40:34 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:53.922 19:40:34 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:09:53.922 19:40:34 -- common/autotest_common.sh@638 -- # local es=0 00:09:53.922 19:40:34 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:53.922 19:40:34 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:53.922 19:40:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.922 19:40:34 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:53.922 19:40:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:53.922 19:40:34 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:53.922 19:40:34 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:53.922 19:40:34 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:53.922 19:40:34 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:53.922 19:40:34 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:53.922 19:40:34 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:53.922 19:40:34 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:53.922 19:40:34 -- common/autotest_common.sh@641 -- # es=1 00:09:53.922 19:40:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:53.922 19:40:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:53.922 19:40:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:53.922 19:40:34 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:09:53.922 19:40:34 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:53.922 19:40:34 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:53.922 [ 0]:0x2 00:09:53.922 19:40:35 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:09:53.922 19:40:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:53.922 19:40:35 -- target/ns_masking.sh@40 -- # nguid=a1a5c0d101724eb591e21fc88a72da99 00:09:53.922 19:40:35 -- target/ns_masking.sh@41 -- # [[ a1a5c0d101724eb591e21fc88a72da99 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:53.922 19:40:35 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:53.922 19:40:35 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:09:53.922 19:40:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:53.922 19:40:35 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:53.922 [ 0]:0x1 00:09:53.922 19:40:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:53.922 19:40:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:53.922 19:40:35 -- target/ns_masking.sh@40 -- # nguid=389943fc653241118b6adc7391016333 00:09:53.922 19:40:35 -- target/ns_masking.sh@41 -- # [[ 389943fc653241118b6adc7391016333 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:53.922 19:40:35 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:09:53.922 19:40:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:53.922 19:40:35 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:54.178 [ 1]:0x2 00:09:54.178 19:40:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:54.178 19:40:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:54.178 19:40:35 -- target/ns_masking.sh@40 -- # nguid=a1a5c0d101724eb591e21fc88a72da99 00:09:54.178 19:40:35 -- target/ns_masking.sh@41 -- # [[ a1a5c0d101724eb591e21fc88a72da99 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:54.178 19:40:35 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:54.436 19:40:35 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:09:54.436 19:40:35 -- common/autotest_common.sh@638 -- # local es=0 00:09:54.436 19:40:35 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:54.436 19:40:35 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:54.436 19:40:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:54.436 19:40:35 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:54.436 19:40:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:54.436 19:40:35 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:54.436 19:40:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:54.436 19:40:35 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:54.436 19:40:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:54.436 19:40:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:54.436 19:40:35 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:54.436 19:40:35 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:54.436 19:40:35 -- common/autotest_common.sh@641 -- # es=1 00:09:54.436 19:40:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:54.436 19:40:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:54.436 19:40:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:54.436 19:40:35 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:09:54.436 19:40:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:54.436 19:40:35 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:54.436 [ 0]:0x2 00:09:54.436 19:40:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:54.436 19:40:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:54.436 19:40:35 -- target/ns_masking.sh@40 -- # nguid=a1a5c0d101724eb591e21fc88a72da99 00:09:54.436 19:40:35 -- target/ns_masking.sh@41 -- # [[ a1a5c0d101724eb591e21fc88a72da99 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:54.436 19:40:35 -- target/ns_masking.sh@91 -- # disconnect 00:09:54.436 19:40:35 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:54.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.436 19:40:35 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:54.695 19:40:36 -- target/ns_masking.sh@95 -- # connect 2 00:09:54.695 19:40:36 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9cf4bbe6-0552-4dc8-bdaa-da86c7fd93ce -a 10.0.0.2 -s 4420 -i 4 00:09:54.954 19:40:36 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:54.954 19:40:36 -- common/autotest_common.sh@1184 -- # local i=0 00:09:54.954 19:40:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.954 19:40:36 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:09:54.954 19:40:36 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:09:54.954 19:40:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:56.859 19:40:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:56.859 19:40:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:56.859 19:40:38 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:56.859 19:40:38 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:09:56.859 19:40:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:56.859 19:40:38 -- common/autotest_common.sh@1194 -- # return 0 00:09:56.859 19:40:38 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:56.859 19:40:38 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:56.859 19:40:38 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:56.859 19:40:38 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:56.859 19:40:38 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:09:56.859 19:40:38 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:56.859 19:40:38 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:57.117 [ 0]:0x1 00:09:57.117 19:40:38 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:57.117 19:40:38 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:57.117 19:40:38 -- target/ns_masking.sh@40 -- # nguid=389943fc653241118b6adc7391016333 00:09:57.117 19:40:38 -- target/ns_masking.sh@41 -- # [[ 389943fc653241118b6adc7391016333 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:57.117 19:40:38 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:09:57.117 19:40:38 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:57.117 19:40:38 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:57.117 [ 1]:0x2 
00:09:57.117 19:40:38 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:57.117 19:40:38 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:57.117 19:40:38 -- target/ns_masking.sh@40 -- # nguid=a1a5c0d101724eb591e21fc88a72da99 00:09:57.117 19:40:38 -- target/ns_masking.sh@41 -- # [[ a1a5c0d101724eb591e21fc88a72da99 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:57.117 19:40:38 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:57.375 19:40:38 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:09:57.375 19:40:38 -- common/autotest_common.sh@638 -- # local es=0 00:09:57.375 19:40:38 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:57.375 19:40:38 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:57.375 19:40:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:57.375 19:40:38 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:57.375 19:40:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:57.375 19:40:38 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:57.375 19:40:38 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:57.375 19:40:38 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:57.375 19:40:38 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:57.375 19:40:38 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:57.375 19:40:38 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:57.375 19:40:38 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:57.375 19:40:38 -- common/autotest_common.sh@641 -- # es=1 00:09:57.375 19:40:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:57.375 19:40:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:57.375 19:40:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:57.375 19:40:38 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:09:57.375 19:40:38 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:57.375 19:40:38 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:57.375 [ 0]:0x2 00:09:57.375 19:40:38 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:57.375 19:40:38 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:57.634 19:40:38 -- target/ns_masking.sh@40 -- # nguid=a1a5c0d101724eb591e21fc88a72da99 00:09:57.634 19:40:38 -- target/ns_masking.sh@41 -- # [[ a1a5c0d101724eb591e21fc88a72da99 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:57.635 19:40:38 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:57.635 19:40:38 -- common/autotest_common.sh@638 -- # local es=0 00:09:57.635 19:40:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:57.635 19:40:38 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:57.635 19:40:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:57.635 19:40:38 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:57.635 19:40:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:57.635 19:40:38 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:57.635 19:40:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:57.635 19:40:38 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:57.635 19:40:38 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:57.635 19:40:38 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:57.635 [2024-04-24 19:40:39.138063] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:57.635 request: 00:09:57.635 { 00:09:57.635 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.635 "nsid": 2, 00:09:57.635 "host": "nqn.2016-06.io.spdk:host1", 00:09:57.635 "method": "nvmf_ns_remove_host", 00:09:57.635 "req_id": 1 00:09:57.635 } 00:09:57.635 Got JSON-RPC error response 00:09:57.635 response: 00:09:57.635 { 00:09:57.635 "code": -32602, 00:09:57.635 "message": "Invalid parameters" 00:09:57.635 } 00:09:57.893 19:40:39 -- common/autotest_common.sh@641 -- # es=1 00:09:57.893 19:40:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:57.893 19:40:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:57.893 19:40:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:57.893 19:40:39 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:09:57.893 19:40:39 -- common/autotest_common.sh@638 -- # local es=0 00:09:57.893 19:40:39 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:09:57.893 19:40:39 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:09:57.893 19:40:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:57.893 19:40:39 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:09:57.894 19:40:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:57.894 19:40:39 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:09:57.894 19:40:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:57.894 19:40:39 -- target/ns_masking.sh@39 -- # grep 0x1 00:09:57.894 19:40:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:57.894 19:40:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:57.894 19:40:39 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:57.894 19:40:39 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:57.894 19:40:39 -- common/autotest_common.sh@641 -- # es=1 00:09:57.894 19:40:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:57.894 19:40:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:57.894 19:40:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:57.894 19:40:39 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:09:57.894 19:40:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:57.894 19:40:39 -- target/ns_masking.sh@39 -- # grep 0x2 00:09:57.894 [ 0]:0x2 00:09:57.894 19:40:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:57.894 19:40:39 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:09:57.894 19:40:39 -- target/ns_masking.sh@40 -- # nguid=a1a5c0d101724eb591e21fc88a72da99 00:09:57.894 19:40:39 -- target/ns_masking.sh@41 -- # [[ a1a5c0d101724eb591e21fc88a72da99 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:57.894 19:40:39 -- target/ns_masking.sh@108 -- # disconnect 00:09:57.894 19:40:39 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:57.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.894 19:40:39 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:58.152 19:40:39 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:58.152 19:40:39 -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:58.152 19:40:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:58.152 19:40:39 -- nvmf/common.sh@117 -- # sync 00:09:58.152 19:40:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:58.152 19:40:39 -- nvmf/common.sh@120 -- # set +e 00:09:58.152 19:40:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:58.152 19:40:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:58.152 rmmod nvme_tcp 00:09:58.152 rmmod nvme_fabrics 00:09:58.152 rmmod nvme_keyring 00:09:58.152 19:40:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:58.152 19:40:39 -- nvmf/common.sh@124 -- # set -e 00:09:58.152 19:40:39 -- nvmf/common.sh@125 -- # return 0 00:09:58.152 19:40:39 -- nvmf/common.sh@478 -- # '[' -n 1642934 ']' 00:09:58.152 19:40:39 -- nvmf/common.sh@479 -- # killprocess 1642934 00:09:58.152 19:40:39 -- common/autotest_common.sh@936 -- # '[' -z 1642934 ']' 00:09:58.152 19:40:39 -- common/autotest_common.sh@940 -- # kill -0 1642934 00:09:58.152 19:40:39 -- common/autotest_common.sh@941 -- # uname 00:09:58.152 19:40:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:58.152 19:40:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1642934 00:09:58.152 19:40:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:58.152 19:40:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:58.152 19:40:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1642934' 00:09:58.152 killing process with pid 1642934 00:09:58.152 19:40:39 -- common/autotest_common.sh@955 -- # kill 1642934 00:09:58.152 19:40:39 -- common/autotest_common.sh@960 -- # wait 1642934 00:09:58.411 19:40:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:58.411 19:40:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:58.411 19:40:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:58.411 19:40:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:58.411 19:40:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:58.411 19:40:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.411 19:40:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.411 19:40:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.952 19:40:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:00.952 00:10:00.952 real 0m17.231s 00:10:00.952 user 0m54.436s 00:10:00.952 sys 0m3.793s 00:10:00.952 19:40:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:00.952 19:40:41 -- common/autotest_common.sh@10 -- # set +x 00:10:00.952 ************************************ 00:10:00.952 END TEST nvmf_ns_masking 00:10:00.952 
************************************ 00:10:00.952 19:40:41 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:00.952 19:40:41 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:00.952 19:40:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:00.952 19:40:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:00.952 19:40:41 -- common/autotest_common.sh@10 -- # set +x 00:10:00.952 ************************************ 00:10:00.952 START TEST nvmf_nvme_cli 00:10:00.952 ************************************ 00:10:00.952 19:40:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:00.952 * Looking for test storage... 00:10:00.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.953 19:40:42 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.953 19:40:42 -- nvmf/common.sh@7 -- # uname -s 00:10:00.953 19:40:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.953 19:40:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.953 19:40:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.953 19:40:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.953 19:40:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.953 19:40:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.953 19:40:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.953 19:40:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.953 19:40:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.953 19:40:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.953 19:40:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.953 19:40:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.953 19:40:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.953 19:40:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.953 19:40:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.953 19:40:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.953 19:40:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.953 19:40:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.953 19:40:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.953 19:40:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.953 19:40:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.953 19:40:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.953 19:40:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.953 19:40:42 -- paths/export.sh@5 -- # export PATH 00:10:00.953 19:40:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.953 19:40:42 -- nvmf/common.sh@47 -- # : 0 00:10:00.953 19:40:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.953 19:40:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.953 19:40:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.953 19:40:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.953 19:40:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.953 19:40:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.953 19:40:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.953 19:40:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.953 19:40:42 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.953 19:40:42 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.953 19:40:42 -- target/nvme_cli.sh@14 -- # devs=() 00:10:00.953 19:40:42 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:10:00.953 19:40:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:00.953 19:40:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.953 19:40:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:00.953 19:40:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:00.953 19:40:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:00.953 19:40:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.953 19:40:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.953 19:40:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.953 19:40:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:00.953 19:40:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:00.953 19:40:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:00.953 19:40:42 -- common/autotest_common.sh@10 -- # set +x 00:10:02.858 19:40:44 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:02.858 19:40:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:02.858 19:40:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:02.858 19:40:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:02.858 19:40:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:02.858 19:40:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:02.858 19:40:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:02.858 19:40:44 -- nvmf/common.sh@295 -- # net_devs=() 00:10:02.858 19:40:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:02.858 19:40:44 -- nvmf/common.sh@296 -- # e810=() 00:10:02.858 19:40:44 -- nvmf/common.sh@296 -- # local -ga e810 00:10:02.858 19:40:44 -- nvmf/common.sh@297 -- # x722=() 00:10:02.858 19:40:44 -- nvmf/common.sh@297 -- # local -ga x722 00:10:02.858 19:40:44 -- nvmf/common.sh@298 -- # mlx=() 00:10:02.858 19:40:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:02.858 19:40:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.858 19:40:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.858 19:40:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.858 19:40:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.858 19:40:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.858 19:40:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.858 19:40:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.858 19:40:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.858 19:40:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.858 19:40:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.858 19:40:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.858 19:40:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:02.858 19:40:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:02.858 19:40:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:02.858 19:40:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.858 19:40:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:02.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:02.858 19:40:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.858 19:40:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:02.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:02.858 19:40:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:10:02.858 19:40:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:02.858 19:40:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.858 19:40:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.858 19:40:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:02.858 19:40:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.858 19:40:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:02.858 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:02.858 19:40:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.858 19:40:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.858 19:40:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.858 19:40:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:02.858 19:40:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.858 19:40:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:02.858 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:02.858 19:40:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.858 19:40:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:02.858 19:40:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:02.858 19:40:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:02.858 19:40:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.858 19:40:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.858 19:40:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.858 19:40:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:02.858 19:40:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.858 19:40:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.858 19:40:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:02.858 19:40:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.858 19:40:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.858 19:40:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:02.858 19:40:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:02.858 19:40:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.858 19:40:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.858 19:40:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.858 19:40:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.858 19:40:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:02.858 19:40:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.858 19:40:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.858 19:40:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.858 19:40:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:02.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:02.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:10:02.858 00:10:02.858 --- 10.0.0.2 ping statistics --- 00:10:02.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.858 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:10:02.858 19:40:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:02.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:10:02.858 00:10:02.858 --- 10.0.0.1 ping statistics --- 00:10:02.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.858 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:10:02.858 19:40:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.858 19:40:44 -- nvmf/common.sh@411 -- # return 0 00:10:02.858 19:40:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:02.858 19:40:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.858 19:40:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:02.858 19:40:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.858 19:40:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:02.858 19:40:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:02.858 19:40:44 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:02.858 19:40:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:02.858 19:40:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:02.858 19:40:44 -- common/autotest_common.sh@10 -- # set +x 00:10:02.858 19:40:44 -- nvmf/common.sh@470 -- # nvmfpid=1646508 00:10:02.858 19:40:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:02.858 19:40:44 -- nvmf/common.sh@471 -- # waitforlisten 1646508 00:10:02.858 19:40:44 -- common/autotest_common.sh@817 -- # '[' -z 1646508 ']' 00:10:02.859 19:40:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.859 19:40:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:02.859 19:40:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.859 19:40:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:02.859 19:40:44 -- common/autotest_common.sh@10 -- # set +x 00:10:02.859 [2024-04-24 19:40:44.355009] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:10:02.859 [2024-04-24 19:40:44.355096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.117 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.117 [2024-04-24 19:40:44.426008] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.117 [2024-04-24 19:40:44.547404] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.117 [2024-04-24 19:40:44.547482] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:03.117 [2024-04-24 19:40:44.547498] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.118 [2024-04-24 19:40:44.547512] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.118 [2024-04-24 19:40:44.547526] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.118 [2024-04-24 19:40:44.547625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.118 [2024-04-24 19:40:44.547684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.118 [2024-04-24 19:40:44.547738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.118 [2024-04-24 19:40:44.547742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.054 19:40:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:04.054 19:40:45 -- common/autotest_common.sh@850 -- # return 0 00:10:04.054 19:40:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:04.054 19:40:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:04.054 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:10:04.054 19:40:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.054 19:40:45 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.054 19:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:04.054 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:10:04.054 [2024-04-24 19:40:45.368782] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.054 19:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:04.054 19:40:45 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:04.054 19:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:04.054 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:10:04.054 Malloc0 00:10:04.054 19:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:04.054 19:40:45 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:04.054 19:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:04.054 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:10:04.054 Malloc1 00:10:04.054 19:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:04.054 19:40:45 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:04.054 19:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:04.054 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:10:04.054 19:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:04.054 19:40:45 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.054 19:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:04.054 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:10:04.054 19:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:04.054 19:40:45 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:04.054 19:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:04.054 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:10:04.054 19:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:04.054 19:40:45 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:10:04.054 19:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:04.054 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:10:04.054 [2024-04-24 19:40:45.455332] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.054 19:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:04.054 19:40:45 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:04.054 19:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:04.054 19:40:45 -- common/autotest_common.sh@10 -- # set +x 00:10:04.054 19:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:04.055 19:40:45 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:04.313 00:10:04.313 Discovery Log Number of Records 2, Generation counter 2 00:10:04.313 =====Discovery Log Entry 0====== 00:10:04.313 trtype: tcp 00:10:04.313 adrfam: ipv4 00:10:04.313 subtype: current discovery subsystem 00:10:04.313 treq: not required 00:10:04.313 portid: 0 00:10:04.313 trsvcid: 4420 00:10:04.313 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:04.313 traddr: 10.0.0.2 00:10:04.313 eflags: explicit discovery connections, duplicate discovery information 00:10:04.313 sectype: none 00:10:04.313 =====Discovery Log Entry 1====== 00:10:04.313 trtype: tcp 00:10:04.313 adrfam: ipv4 00:10:04.313 subtype: nvme subsystem 00:10:04.313 treq: not required 00:10:04.313 portid: 0 00:10:04.313 trsvcid: 4420 00:10:04.313 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:04.313 traddr: 10.0.0.2 00:10:04.313 eflags: none 00:10:04.313 sectype: none 00:10:04.313 19:40:45 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:04.313 19:40:45 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:04.313 19:40:45 -- nvmf/common.sh@511 -- # local dev _ 00:10:04.313 19:40:45 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:04.313 19:40:45 -- nvmf/common.sh@510 -- # nvme list 00:10:04.313 19:40:45 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:10:04.313 19:40:45 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:04.313 19:40:45 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:10:04.313 19:40:45 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:04.313 19:40:45 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:04.313 19:40:45 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:04.937 19:40:46 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:04.937 19:40:46 -- common/autotest_common.sh@1184 -- # local i=0 00:10:04.937 19:40:46 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:04.937 19:40:46 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:10:04.937 19:40:46 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:10:04.937 19:40:46 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:06.845 19:40:48 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:06.845 19:40:48 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:06.845 19:40:48 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.845 19:40:48 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
00:10:06.845 19:40:48 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.845 19:40:48 -- common/autotest_common.sh@1194 -- # return 0 00:10:06.845 19:40:48 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:06.845 19:40:48 -- nvmf/common.sh@511 -- # local dev _ 00:10:06.845 19:40:48 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:06.845 19:40:48 -- nvmf/common.sh@510 -- # nvme list 00:10:06.845 19:40:48 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:10:06.845 19:40:48 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:06.845 19:40:48 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:10:06.845 19:40:48 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:06.845 19:40:48 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:06.845 19:40:48 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:10:06.845 19:40:48 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:06.845 19:40:48 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:06.845 19:40:48 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:10:06.845 19:40:48 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:06.845 19:40:48 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:06.845 /dev/nvme0n1 ]] 00:10:06.845 19:40:48 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:06.845 19:40:48 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:10:06.845 19:40:48 -- nvmf/common.sh@511 -- # local dev _ 00:10:06.845 19:40:48 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:06.845 19:40:48 -- nvmf/common.sh@510 -- # nvme list 00:10:06.845 19:40:48 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:10:06.845 19:40:48 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:06.845 19:40:48 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:10:06.845 19:40:48 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:06.845 19:40:48 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:06.845 19:40:48 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:10:06.845 19:40:48 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:06.845 19:40:48 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:06.845 19:40:48 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:10:06.845 19:40:48 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:06.845 19:40:48 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:06.845 19:40:48 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:07.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.105 19:40:48 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:07.105 19:40:48 -- common/autotest_common.sh@1205 -- # local i=0 00:10:07.105 19:40:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:07.105 19:40:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.105 19:40:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:07.105 19:40:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:07.105 19:40:48 -- common/autotest_common.sh@1217 -- # return 0 00:10:07.105 19:40:48 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:07.105 19:40:48 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:07.105 19:40:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.105 19:40:48 -- common/autotest_common.sh@10 -- # set +x 00:10:07.105 19:40:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.105 19:40:48 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:07.105 19:40:48 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:07.105 19:40:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:07.105 19:40:48 -- nvmf/common.sh@117 -- # sync 00:10:07.105 19:40:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:07.105 19:40:48 -- nvmf/common.sh@120 -- # set +e 00:10:07.105 19:40:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:07.105 19:40:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:07.105 rmmod nvme_tcp 00:10:07.105 rmmod nvme_fabrics 00:10:07.105 rmmod nvme_keyring 00:10:07.105 19:40:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:07.105 19:40:48 -- nvmf/common.sh@124 -- # set -e 00:10:07.105 19:40:48 -- nvmf/common.sh@125 -- # return 0 00:10:07.105 19:40:48 -- nvmf/common.sh@478 -- # '[' -n 1646508 ']' 00:10:07.105 19:40:48 -- nvmf/common.sh@479 -- # killprocess 1646508 00:10:07.105 19:40:48 -- common/autotest_common.sh@936 -- # '[' -z 1646508 ']' 00:10:07.105 19:40:48 -- common/autotest_common.sh@940 -- # kill -0 1646508 00:10:07.105 19:40:48 -- common/autotest_common.sh@941 -- # uname 00:10:07.105 19:40:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:07.105 19:40:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1646508 00:10:07.105 19:40:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:07.105 19:40:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:07.105 19:40:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1646508' 00:10:07.105 killing process with pid 1646508 00:10:07.105 19:40:48 -- common/autotest_common.sh@955 -- # kill 1646508 00:10:07.105 19:40:48 -- common/autotest_common.sh@960 -- # wait 1646508 00:10:07.363 19:40:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:07.363 19:40:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:07.363 19:40:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:07.363 19:40:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:07.363 19:40:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:07.363 19:40:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.363 19:40:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:07.363 19:40:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.903 19:40:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:09.903 00:10:09.903 real 0m8.832s 00:10:09.903 user 0m17.814s 00:10:09.903 sys 0m2.223s 00:10:09.903 19:40:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:09.903 19:40:50 -- common/autotest_common.sh@10 -- # set +x 00:10:09.903 ************************************ 00:10:09.903 END TEST nvmf_nvme_cli 00:10:09.903 ************************************ 00:10:09.903 19:40:50 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:10:09.903 19:40:50 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:09.903 19:40:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:09.903 19:40:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:09.903 19:40:50 -- common/autotest_common.sh@10 -- # set +x 00:10:09.903 ************************************ 00:10:09.903 START TEST nvmf_vfio_user 00:10:09.903 ************************************ 00:10:09.903 19:40:51 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:09.903 * Looking for test storage... 00:10:09.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.903 19:40:51 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.903 19:40:51 -- nvmf/common.sh@7 -- # uname -s 00:10:09.903 19:40:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.903 19:40:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.903 19:40:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.903 19:40:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.903 19:40:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.903 19:40:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.903 19:40:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.903 19:40:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.903 19:40:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.903 19:40:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.903 19:40:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:09.903 19:40:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:09.903 19:40:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.903 19:40:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.903 19:40:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.903 19:40:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.903 19:40:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.903 19:40:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.903 19:40:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.903 19:40:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.903 19:40:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.903 19:40:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.903 19:40:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.903 19:40:51 -- paths/export.sh@5 -- # export PATH 00:10:09.903 19:40:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.903 19:40:51 -- nvmf/common.sh@47 -- # : 0 00:10:09.903 19:40:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:09.903 19:40:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:09.903 19:40:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.903 19:40:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.903 19:40:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.903 19:40:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:09.903 19:40:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:09.903 19:40:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:09.903 19:40:51 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:09.903 19:40:51 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:09.903 19:40:51 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:10:09.903 19:40:51 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:09.903 19:40:51 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:09.903 19:40:51 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:09.904 19:40:51 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:10:09.904 19:40:51 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:10:09.904 19:40:51 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:10:09.904 19:40:51 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:10:09.904 19:40:51 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1647453 00:10:09.904 19:40:51 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:10:09.904 19:40:51 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1647453' 00:10:09.904 Process pid: 1647453 00:10:09.904 19:40:51 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:09.904 19:40:51 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1647453 00:10:09.904 19:40:51 -- common/autotest_common.sh@817 -- # '[' -z 1647453 ']' 00:10:09.904 19:40:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.904 19:40:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:09.904 19:40:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.904 19:40:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:09.904 19:40:51 -- common/autotest_common.sh@10 -- # set +x 00:10:09.904 [2024-04-24 19:40:51.163783] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:10:09.904 [2024-04-24 19:40:51.163867] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.904 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.904 [2024-04-24 19:40:51.231394] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.904 [2024-04-24 19:40:51.354604] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.904 [2024-04-24 19:40:51.354684] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.904 [2024-04-24 19:40:51.354702] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.904 [2024-04-24 19:40:51.354716] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.904 [2024-04-24 19:40:51.354728] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:09.904 [2024-04-24 19:40:51.354791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.904 [2024-04-24 19:40:51.358650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.904 [2024-04-24 19:40:51.358700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.904 [2024-04-24 19:40:51.358706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.161 19:40:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:10.161 19:40:51 -- common/autotest_common.sh@850 -- # return 0 00:10:10.161 19:40:51 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:11.098 19:40:52 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:10:11.355 19:40:52 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:11.355 19:40:52 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:11.355 19:40:52 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:11.355 19:40:52 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:11.355 19:40:52 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:11.613 Malloc1 00:10:11.613 19:40:53 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:11.870 19:40:53 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:12.127 19:40:53 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:12.385 19:40:53 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:12.385 19:40:53 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:12.385 19:40:53 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:12.643 Malloc2 00:10:12.643 19:40:54 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:12.901 19:40:54 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:13.158 19:40:54 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:13.418 19:40:54 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:13.418 19:40:54 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:13.418 19:40:54 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:13.418 19:40:54 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:13.418 19:40:54 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:13.418 19:40:54 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:13.418 [2024-04-24 19:40:54.765299] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:10:13.418 [2024-04-24 19:40:54.765342] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647989 ] 00:10:13.418 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.418 [2024-04-24 19:40:54.798046] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:13.418 [2024-04-24 19:40:54.807026] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:13.418 [2024-04-24 19:40:54.807053] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f544bb54000 00:10:13.418 [2024-04-24 19:40:54.808016] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:13.418 [2024-04-24 19:40:54.809010] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:13.418 [2024-04-24 19:40:54.810016] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:13.418 [2024-04-24 19:40:54.811022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:13.419 [2024-04-24 19:40:54.812025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:13.419 [2024-04-24 19:40:54.813031] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:10:13.419 [2024-04-24 19:40:54.814040] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:13.419 [2024-04-24 19:40:54.815045] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:13.419 [2024-04-24 19:40:54.816057] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:13.419 [2024-04-24 19:40:54.816080] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f544bb49000 00:10:13.419 [2024-04-24 19:40:54.817220] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:13.419 [2024-04-24 19:40:54.832270] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:13.419 [2024-04-24 19:40:54.832308] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:13.419 [2024-04-24 19:40:54.837182] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:13.419 [2024-04-24 19:40:54.837241] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:13.419 [2024-04-24 19:40:54.837338] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:13.419 [2024-04-24 19:40:54.837372] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:13.419 [2024-04-24 19:40:54.837382] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:13.419 [2024-04-24 19:40:54.838173] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:13.419 [2024-04-24 19:40:54.838192] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:13.419 [2024-04-24 19:40:54.838205] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:13.419 [2024-04-24 19:40:54.839175] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:13.419 [2024-04-24 19:40:54.839194] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:13.419 [2024-04-24 19:40:54.839209] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:13.419 [2024-04-24 19:40:54.842637] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:13.419 [2024-04-24 19:40:54.842658] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:13.419 [2024-04-24 19:40:54.843191] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:13.419 [2024-04-24 19:40:54.843209] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:13.419 [2024-04-24 19:40:54.843222] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:13.419 [2024-04-24 19:40:54.843234] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:13.419 [2024-04-24 19:40:54.843343] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:13.419 [2024-04-24 19:40:54.843351] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:13.419 [2024-04-24 19:40:54.843359] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:13.419 [2024-04-24 19:40:54.844202] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:13.419 [2024-04-24 19:40:54.845205] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:13.419 [2024-04-24 19:40:54.846210] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:13.419 [2024-04-24 19:40:54.847203] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:13.419 [2024-04-24 19:40:54.847310] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:13.419 [2024-04-24 19:40:54.848219] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:13.419 [2024-04-24 19:40:54.848237] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:13.419 [2024-04-24 19:40:54.848246] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:13.419 [2024-04-24 19:40:54.848270] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:13.419 [2024-04-24 19:40:54.848283] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:13.419 [2024-04-24 19:40:54.848313] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:13.419 [2024-04-24 19:40:54.848323] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:13.419 [2024-04-24 19:40:54.848346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:13.419 [2024-04-24 
19:40:54.848423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:13.419 [2024-04-24 19:40:54.848440] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:13.419 [2024-04-24 19:40:54.848448] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:13.419 [2024-04-24 19:40:54.848456] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:10:13.419 [2024-04-24 19:40:54.848463] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:13.419 [2024-04-24 19:40:54.848471] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:10:13.419 [2024-04-24 19:40:54.848478] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:13.419 [2024-04-24 19:40:54.848485] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:13.419 [2024-04-24 19:40:54.848502] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:13.419 [2024-04-24 19:40:54.848518] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:13.419 [2024-04-24 19:40:54.848534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:13.419 [2024-04-24 19:40:54.848558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.419 [2024-04-24 19:40:54.848572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.419 [2024-04-24 19:40:54.848583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.419 [2024-04-24 19:40:54.848595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.419 [2024-04-24 19:40:54.848603] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:13.419 [2024-04-24 19:40:54.848640] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:13.419 [2024-04-24 19:40:54.848657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:13.419 [2024-04-24 19:40:54.848680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:13.419 [2024-04-24 19:40:54.848691] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:13.419 [2024-04-24 19:40:54.848699] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:13.419 [2024-04-24 19:40:54.848714] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:13.419 [2024-04-24 19:40:54.848725] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:13.419 [2024-04-24 19:40:54.848738] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:13.419 [2024-04-24 19:40:54.848752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:13.419 [2024-04-24 19:40:54.848803] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:10:13.419 [2024-04-24 19:40:54.848818] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:13.419 [2024-04-24 19:40:54.848831] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:13.419 [2024-04-24 19:40:54.848839] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:13.419 [2024-04-24 19:40:54.848849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:13.419 [2024-04-24 19:40:54.848865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:13.419 [2024-04-24 19:40:54.848883] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:13.419 [2024-04-24 19:40:54.848902] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:13.419 [2024-04-24 19:40:54.848918] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:13.419 [2024-04-24 19:40:54.848945] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:13.419 [2024-04-24 19:40:54.848953] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:13.419 [2024-04-24 19:40:54.848970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:13.420 [2024-04-24 19:40:54.848990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:13.420 [2024-04-24 19:40:54.849014] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:13.420 [2024-04-24 19:40:54.849028] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:13.420 [2024-04-24 19:40:54.849039] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:10:13.420 [2024-04-24 19:40:54.849047] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:13.420 [2024-04-24 19:40:54.849056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:13.420 [2024-04-24 19:40:54.849067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:13.420 [2024-04-24 19:40:54.849082] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:13.420 [2024-04-24 19:40:54.849093] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:10:13.420 [2024-04-24 19:40:54.849108] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:13.420 [2024-04-24 19:40:54.849118] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:13.420 [2024-04-24 19:40:54.849127] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:13.420 [2024-04-24 19:40:54.849136] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:13.420 [2024-04-24 19:40:54.849143] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:13.420 [2024-04-24 19:40:54.849151] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:13.420 [2024-04-24 19:40:54.849178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:13.420 [2024-04-24 19:40:54.849197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:13.420 [2024-04-24 19:40:54.849216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:13.420 [2024-04-24 19:40:54.849228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:13.420 [2024-04-24 19:40:54.849243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:13.420 [2024-04-24 19:40:54.849254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:13.420 [2024-04-24 19:40:54.849273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:13.420 [2024-04-24 19:40:54.849285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:13.420 [2024-04-24 19:40:54.849303] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:13.420 [2024-04-24 19:40:54.849312] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:13.420 [2024-04-24 19:40:54.849317] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:13.420 [2024-04-24 19:40:54.849323] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:13.420 [2024-04-24 19:40:54.849332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:13.420 [2024-04-24 19:40:54.849344] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:13.420 [2024-04-24 19:40:54.849352] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:13.420 [2024-04-24 19:40:54.849360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:13.420 [2024-04-24 19:40:54.849371] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:13.420 [2024-04-24 19:40:54.849379] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:13.420 [2024-04-24 19:40:54.849388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:13.420 [2024-04-24 19:40:54.849400] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:13.420 [2024-04-24 19:40:54.849408] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:13.420 [2024-04-24 19:40:54.849416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:13.420 [2024-04-24 19:40:54.849428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:13.420 [2024-04-24 19:40:54.849449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:13.420 [2024-04-24 19:40:54.849464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:13.420 [2024-04-24 19:40:54.849475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:13.420 ===================================================== 00:10:13.420 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:13.420 ===================================================== 00:10:13.420 Controller Capabilities/Features 00:10:13.420 ================================ 00:10:13.420 Vendor ID: 4e58 00:10:13.420 Subsystem Vendor ID: 4e58 00:10:13.420 Serial Number: SPDK1 00:10:13.420 Model Number: SPDK bdev Controller 00:10:13.420 Firmware Version: 24.05 00:10:13.420 Recommended Arb Burst: 6 00:10:13.420 IEEE OUI Identifier: 8d 6b 50 00:10:13.420 Multi-path I/O 00:10:13.420 May have multiple subsystem ports: Yes 00:10:13.420 May have multiple controllers: Yes 00:10:13.420 Associated with SR-IOV VF: No 00:10:13.420 Max Data Transfer Size: 131072 00:10:13.420 Max Number of Namespaces: 32 00:10:13.420 Max Number of I/O Queues: 127 00:10:13.420 NVMe 
Specification Version (VS): 1.3 00:10:13.420 NVMe Specification Version (Identify): 1.3 00:10:13.420 Maximum Queue Entries: 256 00:10:13.420 Contiguous Queues Required: Yes 00:10:13.420 Arbitration Mechanisms Supported 00:10:13.420 Weighted Round Robin: Not Supported 00:10:13.420 Vendor Specific: Not Supported 00:10:13.420 Reset Timeout: 15000 ms 00:10:13.420 Doorbell Stride: 4 bytes 00:10:13.420 NVM Subsystem Reset: Not Supported 00:10:13.420 Command Sets Supported 00:10:13.420 NVM Command Set: Supported 00:10:13.420 Boot Partition: Not Supported 00:10:13.420 Memory Page Size Minimum: 4096 bytes 00:10:13.420 Memory Page Size Maximum: 4096 bytes 00:10:13.420 Persistent Memory Region: Not Supported 00:10:13.420 Optional Asynchronous Events Supported 00:10:13.420 Namespace Attribute Notices: Supported 00:10:13.420 Firmware Activation Notices: Not Supported 00:10:13.420 ANA Change Notices: Not Supported 00:10:13.420 PLE Aggregate Log Change Notices: Not Supported 00:10:13.420 LBA Status Info Alert Notices: Not Supported 00:10:13.420 EGE Aggregate Log Change Notices: Not Supported 00:10:13.420 Normal NVM Subsystem Shutdown event: Not Supported 00:10:13.420 Zone Descriptor Change Notices: Not Supported 00:10:13.420 Discovery Log Change Notices: Not Supported 00:10:13.420 Controller Attributes 00:10:13.420 128-bit Host Identifier: Supported 00:10:13.420 Non-Operational Permissive Mode: Not Supported 00:10:13.420 NVM Sets: Not Supported 00:10:13.420 Read Recovery Levels: Not Supported 00:10:13.421 Endurance Groups: Not Supported 00:10:13.421 Predictable Latency Mode: Not Supported 00:10:13.421 Traffic Based Keep ALive: Not Supported 00:10:13.421 Namespace Granularity: Not Supported 00:10:13.421 SQ Associations: Not Supported 00:10:13.421 UUID List: Not Supported 00:10:13.421 Multi-Domain Subsystem: Not Supported 00:10:13.421 Fixed Capacity Management: Not Supported 00:10:13.421 Variable Capacity Management: Not Supported 00:10:13.421 Delete Endurance Group: Not Supported 00:10:13.421 Delete NVM Set: Not Supported 00:10:13.421 Extended LBA Formats Supported: Not Supported 00:10:13.421 Flexible Data Placement Supported: Not Supported 00:10:13.421 00:10:13.421 Controller Memory Buffer Support 00:10:13.421 ================================ 00:10:13.421 Supported: No 00:10:13.421 00:10:13.421 Persistent Memory Region Support 00:10:13.421 ================================ 00:10:13.421 Supported: No 00:10:13.421 00:10:13.421 Admin Command Set Attributes 00:10:13.421 ============================ 00:10:13.421 Security Send/Receive: Not Supported 00:10:13.421 Format NVM: Not Supported 00:10:13.421 Firmware Activate/Download: Not Supported 00:10:13.421 Namespace Management: Not Supported 00:10:13.421 Device Self-Test: Not Supported 00:10:13.421 Directives: Not Supported 00:10:13.421 NVMe-MI: Not Supported 00:10:13.421 Virtualization Management: Not Supported 00:10:13.421 Doorbell Buffer Config: Not Supported 00:10:13.421 Get LBA Status Capability: Not Supported 00:10:13.421 Command & Feature Lockdown Capability: Not Supported 00:10:13.421 Abort Command Limit: 4 00:10:13.421 Async Event Request Limit: 4 00:10:13.421 Number of Firmware Slots: N/A 00:10:13.421 Firmware Slot 1 Read-Only: N/A 00:10:13.421 Firmware Activation Without Reset: N/A 00:10:13.421 Multiple Update Detection Support: N/A 00:10:13.421 Firmware Update Granularity: No Information Provided 00:10:13.421 Per-Namespace SMART Log: No 00:10:13.421 Asymmetric Namespace Access Log Page: Not Supported 00:10:13.421 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:10:13.421 Command Effects Log Page: Supported 00:10:13.421 Get Log Page Extended Data: Supported 00:10:13.421 Telemetry Log Pages: Not Supported 00:10:13.421 Persistent Event Log Pages: Not Supported 00:10:13.421 Supported Log Pages Log Page: May Support 00:10:13.421 Commands Supported & Effects Log Page: Not Supported 00:10:13.421 Feature Identifiers & Effects Log Page:May Support 00:10:13.421 NVMe-MI Commands & Effects Log Page: May Support 00:10:13.421 Data Area 4 for Telemetry Log: Not Supported 00:10:13.421 Error Log Page Entries Supported: 128 00:10:13.421 Keep Alive: Supported 00:10:13.421 Keep Alive Granularity: 10000 ms 00:10:13.421 00:10:13.421 NVM Command Set Attributes 00:10:13.421 ========================== 00:10:13.421 Submission Queue Entry Size 00:10:13.421 Max: 64 00:10:13.421 Min: 64 00:10:13.421 Completion Queue Entry Size 00:10:13.421 Max: 16 00:10:13.421 Min: 16 00:10:13.421 Number of Namespaces: 32 00:10:13.421 Compare Command: Supported 00:10:13.421 Write Uncorrectable Command: Not Supported 00:10:13.421 Dataset Management Command: Supported 00:10:13.421 Write Zeroes Command: Supported 00:10:13.421 Set Features Save Field: Not Supported 00:10:13.421 Reservations: Not Supported 00:10:13.421 Timestamp: Not Supported 00:10:13.421 Copy: Supported 00:10:13.421 Volatile Write Cache: Present 00:10:13.421 Atomic Write Unit (Normal): 1 00:10:13.421 Atomic Write Unit (PFail): 1 00:10:13.421 Atomic Compare & Write Unit: 1 00:10:13.421 Fused Compare & Write: Supported 00:10:13.421 Scatter-Gather List 00:10:13.421 SGL Command Set: Supported (Dword aligned) 00:10:13.421 SGL Keyed: Not Supported 00:10:13.421 SGL Bit Bucket Descriptor: Not Supported 00:10:13.421 SGL Metadata Pointer: Not Supported 00:10:13.421 Oversized SGL: Not Supported 00:10:13.421 SGL Metadata Address: Not Supported 00:10:13.421 SGL Offset: Not Supported 00:10:13.421 Transport SGL Data Block: Not Supported 00:10:13.421 Replay Protected Memory Block: Not Supported 00:10:13.421 00:10:13.421 Firmware Slot Information 00:10:13.421 ========================= 00:10:13.421 Active slot: 1 00:10:13.421 Slot 1 Firmware Revision: 24.05 00:10:13.421 00:10:13.421 00:10:13.421 Commands Supported and Effects 00:10:13.421 ============================== 00:10:13.421 Admin Commands 00:10:13.421 -------------- 00:10:13.421 Get Log Page (02h): Supported 00:10:13.421 Identify (06h): Supported 00:10:13.421 Abort (08h): Supported 00:10:13.421 Set Features (09h): Supported 00:10:13.421 Get Features (0Ah): Supported 00:10:13.421 Asynchronous Event Request (0Ch): Supported 00:10:13.421 Keep Alive (18h): Supported 00:10:13.421 I/O Commands 00:10:13.421 ------------ 00:10:13.421 Flush (00h): Supported LBA-Change 00:10:13.421 Write (01h): Supported LBA-Change 00:10:13.421 Read (02h): Supported 00:10:13.421 Compare (05h): Supported 00:10:13.421 Write Zeroes (08h): Supported LBA-Change 00:10:13.421 Dataset Management (09h): Supported LBA-Change 00:10:13.421 Copy (19h): Supported LBA-Change 00:10:13.421 Unknown (79h): Supported LBA-Change 00:10:13.421 Unknown (7Ah): Supported 00:10:13.421 00:10:13.421 Error Log 00:10:13.421 ========= 00:10:13.421 00:10:13.421 Arbitration 00:10:13.421 =========== 00:10:13.421 Arbitration Burst: 1 00:10:13.421 00:10:13.421 Power Management 00:10:13.421 ================ 00:10:13.421 Number of Power States: 1 00:10:13.421 Current Power State: Power State #0 00:10:13.421 Power State #0: 00:10:13.421 Max Power: 0.00 W 00:10:13.421 Non-Operational State: Operational 00:10:13.421 Entry 
Latency: Not Reported 00:10:13.421 Exit Latency: Not Reported 00:10:13.421 Relative Read Throughput: 0 00:10:13.421 Relative Read Latency: 0 00:10:13.421 Relative Write Throughput: 0 00:10:13.421 Relative Write Latency: 0 00:10:13.421 Idle Power: Not Reported 00:10:13.421 Active Power: Not Reported 00:10:13.421 Non-Operational Permissive Mode: Not Supported 00:10:13.421 00:10:13.421 Health Information 00:10:13.421 ================== 00:10:13.421 Critical Warnings: 00:10:13.421 Available Spare Space: OK 00:10:13.421 Temperature: OK 00:10:13.421 Device Reliability: OK 00:10:13.421 Read Only: No 00:10:13.421 Volatile Memory Backup: OK 00:10:13.421 Current Temperature: 0 Kelvin (-273 Celsius) [2024-04-24 19:40:54.849623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:13.421 [2024-04-24 19:40:54.849648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:13.421 [2024-04-24 19:40:54.849702] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:13.421 [2024-04-24 19:40:54.849720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.421 [2024-04-24 19:40:54.849731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.421 [2024-04-24 19:40:54.849741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.421 [2024-04-24 19:40:54.849750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.421 [2024-04-24 19:40:54.850233] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:13.421 [2024-04-24 19:40:54.850258] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:13.421 [2024-04-24 19:40:54.851236] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:13.421 [2024-04-24 19:40:54.851306] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:13.421 [2024-04-24 19:40:54.851321] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:13.421 [2024-04-24 19:40:54.852246] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:13.421 [2024-04-24 19:40:54.852269] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:10:13.421 [2024-04-24 19:40:54.852326] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:13.421 [2024-04-24 19:40:54.855641] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:13.421 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:13.421 Available Spare: 0% 00:10:13.421 Available Spare Threshold: 0% 00:10:13.421 Life Percentage Used: 0%
00:10:13.421 Data Units Read: 0 00:10:13.421 Data Units Written: 0 00:10:13.421 Host Read Commands: 0 00:10:13.421 Host Write Commands: 0 00:10:13.421 Controller Busy Time: 0 minutes 00:10:13.421 Power Cycles: 0 00:10:13.421 Power On Hours: 0 hours 00:10:13.421 Unsafe Shutdowns: 0 00:10:13.421 Unrecoverable Media Errors: 0 00:10:13.421 Lifetime Error Log Entries: 0 00:10:13.421 Warning Temperature Time: 0 minutes 00:10:13.421 Critical Temperature Time: 0 minutes 00:10:13.421 00:10:13.421 Number of Queues 00:10:13.421 ================ 00:10:13.422 Number of I/O Submission Queues: 127 00:10:13.422 Number of I/O Completion Queues: 127 00:10:13.422 00:10:13.422 Active Namespaces 00:10:13.422 ================= 00:10:13.422 Namespace ID:1 00:10:13.422 Error Recovery Timeout: Unlimited 00:10:13.422 Command Set Identifier: NVM (00h) 00:10:13.422 Deallocate: Supported 00:10:13.422 Deallocated/Unwritten Error: Not Supported 00:10:13.422 Deallocated Read Value: Unknown 00:10:13.422 Deallocate in Write Zeroes: Not Supported 00:10:13.422 Deallocated Guard Field: 0xFFFF 00:10:13.422 Flush: Supported 00:10:13.422 Reservation: Supported 00:10:13.422 Namespace Sharing Capabilities: Multiple Controllers 00:10:13.422 Size (in LBAs): 131072 (0GiB) 00:10:13.422 Capacity (in LBAs): 131072 (0GiB) 00:10:13.422 Utilization (in LBAs): 131072 (0GiB) 00:10:13.422 NGUID: 291B8DD17AAC4E7B93E7FB132B96C50D 00:10:13.422 UUID: 291b8dd1-7aac-4e7b-93e7-fb132b96c50d 00:10:13.422 Thin Provisioning: Not Supported 00:10:13.422 Per-NS Atomic Units: Yes 00:10:13.422 Atomic Boundary Size (Normal): 0 00:10:13.422 Atomic Boundary Size (PFail): 0 00:10:13.422 Atomic Boundary Offset: 0 00:10:13.422 Maximum Single Source Range Length: 65535 00:10:13.422 Maximum Copy Length: 65535 00:10:13.422 Maximum Source Range Count: 1 00:10:13.422 NGUID/EUI64 Never Reused: No 00:10:13.422 Namespace Write Protected: No 00:10:13.422 Number of LBA Formats: 1 00:10:13.422 Current LBA Format: LBA Format #00 00:10:13.422 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:13.422 00:10:13.422 19:40:54 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:13.422 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.681 [2024-04-24 19:40:55.084536] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:18.956 [2024-04-24 19:41:00.107559] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:18.956 Initializing NVMe Controllers 00:10:18.956 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:18.956 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:18.956 Initialization complete. Launching workers. 
00:10:18.956 ======================================================== 00:10:18.956 Latency(us) 00:10:18.956 Device Information : IOPS MiB/s Average min max 00:10:18.956 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34483.91 134.70 3710.85 1210.40 7630.48 00:10:18.956 ======================================================== 00:10:18.956 Total : 34483.91 134.70 3710.85 1210.40 7630.48 00:10:18.956 00:10:18.956 19:41:00 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:18.956 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.956 [2024-04-24 19:41:00.349719] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:24.227 [2024-04-24 19:41:05.387853] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:24.227 Initializing NVMe Controllers 00:10:24.228 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:24.228 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:24.228 Initialization complete. Launching workers. 00:10:24.228 ======================================================== 00:10:24.228 Latency(us) 00:10:24.228 Device Information : IOPS MiB/s Average min max 00:10:24.228 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7982.82 4990.10 11996.16 00:10:24.228 ======================================================== 00:10:24.228 Total : 16051.20 62.70 7982.82 4990.10 11996.16 00:10:24.228 00:10:24.228 19:41:05 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:24.228 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.228 [2024-04-24 19:41:05.602958] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:29.533 [2024-04-24 19:41:10.677019] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:29.533 Initializing NVMe Controllers 00:10:29.533 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:29.533 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:29.533 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:29.533 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:29.533 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:29.533 Initialization complete. Launching workers. 
00:10:29.533 Starting thread on core 2 00:10:29.533 Starting thread on core 3 00:10:29.533 Starting thread on core 1 00:10:29.533 19:41:10 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:29.533 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.533 [2024-04-24 19:41:10.985089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:32.833 [2024-04-24 19:41:14.035556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:32.833 Initializing NVMe Controllers 00:10:32.833 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:32.833 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:32.833 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:32.833 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:32.833 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:32.833 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:32.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:32.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:32.833 Initialization complete. Launching workers. 00:10:32.833 Starting thread on core 1 with urgent priority queue 00:10:32.833 Starting thread on core 2 with urgent priority queue 00:10:32.833 Starting thread on core 3 with urgent priority queue 00:10:32.833 Starting thread on core 0 with urgent priority queue 00:10:32.833 SPDK bdev Controller (SPDK1 ) core 0: 5356.33 IO/s 18.67 secs/100000 ios 00:10:32.833 SPDK bdev Controller (SPDK1 ) core 1: 5161.33 IO/s 19.37 secs/100000 ios 00:10:32.833 SPDK bdev Controller (SPDK1 ) core 2: 5124.67 IO/s 19.51 secs/100000 ios 00:10:32.833 SPDK bdev Controller (SPDK1 ) core 3: 4989.00 IO/s 20.04 secs/100000 ios 00:10:32.833 ======================================================== 00:10:32.833 00:10:32.833 19:41:14 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:32.833 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.833 [2024-04-24 19:41:14.327112] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:33.092 [2024-04-24 19:41:14.360680] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:33.092 Initializing NVMe Controllers 00:10:33.092 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:33.092 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:33.092 Namespace ID: 1 size: 0GB 00:10:33.092 Initialization complete. 00:10:33.092 INFO: using host memory buffer for IO 00:10:33.092 Hello world! 
00:10:33.092 19:41:14 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:33.092 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.352 [2024-04-24 19:41:14.651036] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:34.288 Initializing NVMe Controllers 00:10:34.288 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:34.288 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:34.288 Initialization complete. Launching workers. 00:10:34.288 submit (in ns) avg, min, max = 6834.1, 3480.0, 4014922.2 00:10:34.288 complete (in ns) avg, min, max = 27574.0, 2050.0, 4076890.0 00:10:34.288 00:10:34.288 Submit histogram 00:10:34.288 ================ 00:10:34.288 Range in us Cumulative Count 00:10:34.288 3.461 - 3.484: 0.0222% ( 3) 00:10:34.288 3.484 - 3.508: 0.4285% ( 55) 00:10:34.288 3.508 - 3.532: 1.6845% ( 170) 00:10:34.288 3.532 - 3.556: 4.5512% ( 388) 00:10:34.288 3.556 - 3.579: 10.3288% ( 782) 00:10:34.288 3.579 - 3.603: 18.2194% ( 1068) 00:10:34.288 3.603 - 3.627: 27.5139% ( 1258) 00:10:34.288 3.627 - 3.650: 38.1160% ( 1435) 00:10:34.288 3.650 - 3.674: 46.4647% ( 1130) 00:10:34.288 3.674 - 3.698: 53.8825% ( 1004) 00:10:34.288 3.698 - 3.721: 60.1478% ( 848) 00:10:34.288 3.721 - 3.745: 64.8319% ( 634) 00:10:34.288 3.745 - 3.769: 68.5113% ( 498) 00:10:34.288 3.769 - 3.793: 71.9690% ( 468) 00:10:34.288 3.793 - 3.816: 75.0277% ( 414) 00:10:34.288 3.816 - 3.840: 78.2785% ( 440) 00:10:34.288 3.840 - 3.864: 81.9653% ( 499) 00:10:34.288 3.864 - 3.887: 84.6103% ( 358) 00:10:34.288 3.887 - 3.911: 87.1740% ( 347) 00:10:34.288 3.911 - 3.935: 89.2649% ( 283) 00:10:34.288 3.935 - 3.959: 90.6686% ( 190) 00:10:34.288 3.959 - 3.982: 92.2571% ( 215) 00:10:34.288 3.982 - 4.006: 93.4466% ( 161) 00:10:34.288 4.006 - 4.030: 94.2150% ( 104) 00:10:34.288 4.030 - 4.053: 94.9169% ( 95) 00:10:34.288 4.053 - 4.077: 95.5375% ( 84) 00:10:34.288 4.077 - 4.101: 96.0251% ( 66) 00:10:34.288 4.101 - 4.124: 96.4389% ( 56) 00:10:34.288 4.124 - 4.148: 96.6605% ( 30) 00:10:34.288 4.148 - 4.172: 96.7787% ( 16) 00:10:34.288 4.172 - 4.196: 96.8822% ( 14) 00:10:34.288 4.196 - 4.219: 96.9930% ( 15) 00:10:34.288 4.219 - 4.243: 97.1038% ( 15) 00:10:34.288 4.243 - 4.267: 97.2442% ( 19) 00:10:34.288 4.267 - 4.290: 97.3107% ( 9) 00:10:34.288 4.290 - 4.314: 97.4141% ( 14) 00:10:34.288 4.314 - 4.338: 97.4584% ( 6) 00:10:34.288 4.338 - 4.361: 97.5249% ( 9) 00:10:34.288 4.361 - 4.385: 97.5767% ( 7) 00:10:34.288 4.385 - 4.409: 97.6136% ( 5) 00:10:34.288 4.409 - 4.433: 97.6358% ( 3) 00:10:34.288 4.433 - 4.456: 97.6505% ( 2) 00:10:34.288 4.456 - 4.480: 97.6727% ( 3) 00:10:34.288 4.480 - 4.504: 97.6801% ( 1) 00:10:34.288 4.527 - 4.551: 97.6875% ( 1) 00:10:34.288 4.575 - 4.599: 97.6949% ( 1) 00:10:34.288 4.646 - 4.670: 97.7096% ( 2) 00:10:34.288 4.670 - 4.693: 97.7244% ( 2) 00:10:34.288 4.693 - 4.717: 97.7540% ( 4) 00:10:34.288 4.717 - 4.741: 97.7835% ( 4) 00:10:34.288 4.741 - 4.764: 97.8131% ( 4) 00:10:34.288 4.764 - 4.788: 97.8574% ( 6) 00:10:34.288 4.788 - 4.812: 97.9165% ( 8) 00:10:34.288 4.812 - 4.836: 97.9535% ( 5) 00:10:34.288 4.836 - 4.859: 98.0126% ( 8) 00:10:34.288 4.859 - 4.883: 98.0717% ( 8) 00:10:34.288 4.883 - 4.907: 98.1899% ( 16) 00:10:34.288 4.907 - 4.930: 98.2490% ( 8) 00:10:34.288 4.930 - 4.954: 98.2564% ( 1) 00:10:34.288 4.954 - 4.978: 98.3007% ( 6) 00:10:34.288 4.978 - 
5.001: 98.3450% ( 6) 00:10:34.288 5.001 - 5.025: 98.3524% ( 1) 00:10:34.288 5.025 - 5.049: 98.3598% ( 1) 00:10:34.288 5.049 - 5.073: 98.3894% ( 4) 00:10:34.288 5.073 - 5.096: 98.3967% ( 1) 00:10:34.288 5.096 - 5.120: 98.4041% ( 1) 00:10:34.289 5.120 - 5.144: 98.4115% ( 1) 00:10:34.289 5.144 - 5.167: 98.4189% ( 1) 00:10:34.289 5.167 - 5.191: 98.4263% ( 1) 00:10:34.289 5.191 - 5.215: 98.4411% ( 2) 00:10:34.289 5.215 - 5.239: 98.4485% ( 1) 00:10:34.289 5.239 - 5.262: 98.4559% ( 1) 00:10:34.289 5.286 - 5.310: 98.4706% ( 2) 00:10:34.289 5.310 - 5.333: 98.4780% ( 1) 00:10:34.289 5.333 - 5.357: 98.4928% ( 2) 00:10:34.289 5.357 - 5.381: 98.5002% ( 1) 00:10:34.289 5.381 - 5.404: 98.5150% ( 2) 00:10:34.289 5.404 - 5.428: 98.5371% ( 3) 00:10:34.289 5.428 - 5.452: 98.5445% ( 1) 00:10:34.289 5.641 - 5.665: 98.5519% ( 1) 00:10:34.289 5.665 - 5.689: 98.5667% ( 2) 00:10:34.289 6.068 - 6.116: 98.5815% ( 2) 00:10:34.289 6.353 - 6.400: 98.5888% ( 1) 00:10:34.289 6.447 - 6.495: 98.5962% ( 1) 00:10:34.289 6.684 - 6.732: 98.6036% ( 1) 00:10:34.289 6.779 - 6.827: 98.6184% ( 2) 00:10:34.289 6.874 - 6.921: 98.6258% ( 1) 00:10:34.289 6.969 - 7.016: 98.6332% ( 1) 00:10:34.289 7.064 - 7.111: 98.6406% ( 1) 00:10:34.289 7.111 - 7.159: 98.6479% ( 1) 00:10:34.289 7.159 - 7.206: 98.6553% ( 1) 00:10:34.289 7.348 - 7.396: 98.6627% ( 1) 00:10:34.289 7.396 - 7.443: 98.6701% ( 1) 00:10:34.289 7.443 - 7.490: 98.6775% ( 1) 00:10:34.289 7.585 - 7.633: 98.6849% ( 1) 00:10:34.289 7.633 - 7.680: 98.6923% ( 1) 00:10:34.289 7.680 - 7.727: 98.7071% ( 2) 00:10:34.289 7.727 - 7.775: 98.7144% ( 1) 00:10:34.289 7.775 - 7.822: 98.7292% ( 2) 00:10:34.289 7.822 - 7.870: 98.7440% ( 2) 00:10:34.289 7.870 - 7.917: 98.7514% ( 1) 00:10:34.289 7.964 - 8.012: 98.7588% ( 1) 00:10:34.289 8.059 - 8.107: 98.7662% ( 1) 00:10:34.289 8.107 - 8.154: 98.7809% ( 2) 00:10:34.289 8.154 - 8.201: 98.7957% ( 2) 00:10:34.289 8.201 - 8.249: 98.8105% ( 2) 00:10:34.289 8.249 - 8.296: 98.8179% ( 1) 00:10:34.289 8.296 - 8.344: 98.8253% ( 1) 00:10:34.289 8.344 - 8.391: 98.8327% ( 1) 00:10:34.289 8.533 - 8.581: 98.8400% ( 1) 00:10:34.289 8.581 - 8.628: 98.8622% ( 3) 00:10:34.289 8.723 - 8.770: 98.8696% ( 1) 00:10:34.289 9.007 - 9.055: 98.8918% ( 3) 00:10:34.289 9.387 - 9.434: 98.9139% ( 3) 00:10:34.289 9.434 - 9.481: 98.9361% ( 3) 00:10:34.289 9.529 - 9.576: 98.9435% ( 1) 00:10:34.289 10.003 - 10.050: 98.9509% ( 1) 00:10:34.289 10.050 - 10.098: 98.9583% ( 1) 00:10:34.289 10.098 - 10.145: 98.9656% ( 1) 00:10:34.289 10.193 - 10.240: 98.9804% ( 2) 00:10:34.289 10.335 - 10.382: 98.9878% ( 1) 00:10:34.289 10.382 - 10.430: 98.9952% ( 1) 00:10:34.289 10.619 - 10.667: 99.0026% ( 1) 00:10:34.289 10.999 - 11.046: 99.0174% ( 2) 00:10:34.289 11.046 - 11.093: 99.0248% ( 1) 00:10:34.289 11.093 - 11.141: 99.0321% ( 1) 00:10:34.289 11.141 - 11.188: 99.0395% ( 1) 00:10:34.289 11.330 - 11.378: 99.0469% ( 1) 00:10:34.289 11.520 - 11.567: 99.0543% ( 1) 00:10:34.289 11.662 - 11.710: 99.0617% ( 1) 00:10:34.289 11.757 - 11.804: 99.0691% ( 1) 00:10:34.289 12.041 - 12.089: 99.0765% ( 1) 00:10:34.289 12.136 - 12.231: 99.0839% ( 1) 00:10:34.289 12.705 - 12.800: 99.0912% ( 1) 00:10:34.289 12.800 - 12.895: 99.0986% ( 1) 00:10:34.289 12.990 - 13.084: 99.1060% ( 1) 00:10:34.289 13.084 - 13.179: 99.1208% ( 2) 00:10:34.289 13.653 - 13.748: 99.1282% ( 1) 00:10:34.289 13.748 - 13.843: 99.1356% ( 1) 00:10:34.289 13.938 - 14.033: 99.1430% ( 1) 00:10:34.289 14.507 - 14.601: 99.1504% ( 1) 00:10:34.289 14.981 - 15.076: 99.1577% ( 1) 00:10:34.289 15.076 - 15.170: 99.1651% ( 1) 00:10:34.289 17.161 - 17.256: 
99.1799% ( 2) 00:10:34.289 17.256 - 17.351: 99.1947% ( 2) 00:10:34.289 17.351 - 17.446: 99.2168% ( 3) 00:10:34.289 17.446 - 17.541: 99.2612% ( 6) 00:10:34.289 17.541 - 17.636: 99.2981% ( 5) 00:10:34.289 17.636 - 17.730: 99.3203% ( 3) 00:10:34.289 17.730 - 17.825: 99.3720% ( 7) 00:10:34.289 17.825 - 17.920: 99.3868% ( 2) 00:10:34.289 17.920 - 18.015: 99.4385% ( 7) 00:10:34.289 18.015 - 18.110: 99.4607% ( 3) 00:10:34.289 18.110 - 18.204: 99.5419% ( 11) 00:10:34.289 18.204 - 18.299: 99.6232% ( 11) 00:10:34.289 18.299 - 18.394: 99.6675% ( 6) 00:10:34.289 18.394 - 18.489: 99.6897% ( 3) 00:10:34.289 18.489 - 18.584: 99.7266% ( 5) 00:10:34.289 18.584 - 18.679: 99.7710% ( 6) 00:10:34.289 18.679 - 18.773: 99.8153% ( 6) 00:10:34.289 18.773 - 18.868: 99.8301% ( 2) 00:10:34.289 18.868 - 18.963: 99.8522% ( 3) 00:10:34.289 18.963 - 19.058: 99.8596% ( 1) 00:10:34.289 19.058 - 19.153: 99.8670% ( 1) 00:10:34.289 19.627 - 19.721: 99.8744% ( 1) 00:10:34.289 19.816 - 19.911: 99.8892% ( 2) 00:10:34.289 20.006 - 20.101: 99.8966% ( 1) 00:10:34.289 20.575 - 20.670: 99.9040% ( 1) 00:10:34.289 20.954 - 21.049: 99.9113% ( 1) 00:10:34.289 25.600 - 25.790: 99.9187% ( 1) 00:10:34.289 40.960 - 41.150: 99.9261% ( 1) 00:10:34.289 3980.705 - 4004.978: 99.9852% ( 8) 00:10:34.289 4004.978 - 4029.250: 100.0000% ( 2) 00:10:34.289 00:10:34.289 Complete histogram 00:10:34.289 ================== 00:10:34.289 Range in us Cumulative Count 00:10:34.289 2.039 - 2.050: 0.0369% ( 5) 00:10:34.289 2.050 - 2.062: 6.3613% ( 856) 00:10:34.289 2.062 - 2.074: 13.2989% ( 939) 00:10:34.289 2.074 - 2.086: 16.8009% ( 474) 00:10:34.289 2.086 - 2.098: 47.3365% ( 4133) 00:10:34.289 2.098 - 2.110: 59.7636% ( 1682) 00:10:34.289 2.110 - 2.121: 62.3938% ( 356) 00:10:34.289 2.121 - 2.133: 67.2922% ( 663) 00:10:34.289 2.133 - 2.145: 68.9989% ( 231) 00:10:34.289 2.145 - 2.157: 72.1241% ( 423) 00:10:34.289 2.157 - 2.169: 84.7580% ( 1710) 00:10:34.289 2.169 - 2.181: 88.7034% ( 534) 00:10:34.289 2.181 - 2.193: 89.8412% ( 154) 00:10:34.289 2.193 - 2.204: 91.2154% ( 186) 00:10:34.289 2.204 - 2.216: 92.2571% ( 141) 00:10:34.289 2.216 - 2.228: 92.8703% ( 83) 00:10:34.289 2.228 - 2.240: 94.4293% ( 211) 00:10:34.289 2.240 - 2.252: 95.3897% ( 130) 00:10:34.289 2.252 - 2.264: 95.6335% ( 33) 00:10:34.289 2.264 - 2.276: 95.8256% ( 26) 00:10:34.289 2.276 - 2.287: 95.9512% ( 17) 00:10:34.289 2.287 - 2.299: 95.9808% ( 4) 00:10:34.289 2.299 - 2.311: 96.1138% ( 18) 00:10:34.289 2.311 - 2.323: 96.2394% ( 17) 00:10:34.289 2.323 - 2.335: 96.3428% ( 14) 00:10:34.289 2.335 - 2.347: 96.4684% ( 17) 00:10:34.289 2.347 - 2.359: 96.6605% ( 26) 00:10:34.289 2.359 - 2.370: 96.9856% ( 44) 00:10:34.289 2.370 - 2.382: 97.2516% ( 36) 00:10:34.289 2.382 - 2.394: 97.6062% ( 48) 00:10:34.289 2.394 - 2.406: 97.8352% ( 31) 00:10:34.289 2.406 - 2.418: 97.9535% ( 16) 00:10:34.289 2.418 - 2.430: 98.1086% ( 21) 00:10:34.289 2.430 - 2.441: 98.1603% ( 7) 00:10:34.289 2.441 - 2.453: 98.2047% ( 6) 00:10:34.289 2.453 - 2.465: 98.2564% ( 7) 00:10:34.289 2.465 - 2.477: 98.2785% ( 3) 00:10:34.289 2.477 - 2.489: 98.3229% ( 6) 00:10:34.289 2.489 - 2.501: 98.3376% ( 2) 00:10:34.289 2.501 - 2.513: 98.3746% ( 5) 00:10:34.289 2.513 - 2.524: 98.3967% ( 3) 00:10:34.289 2.524 - 2.536: 98.4115% ( 2) 00:10:34.289 2.536 - 2.548: 98.4337% ( 3) 00:10:34.289 2.548 - 2.560: 98.4559% ( 3) 00:10:34.289 2.584 - 2.596: 98.4706% ( 2) 00:10:34.289 2.596 - 2.607: 98.4854% ( 2) 00:10:34.289 2.607 - 2.619: 98.4928% ( 1) 00:10:34.289 2.619 - 2.631: 98.5076% ( 2) 00:10:34.289 2.643 - 2.655: 98.5150% ( 1) 00:10:34.289 2.679 - 2.690: 
98.5223% ( 1) 00:10:34.289 2.868 - 2.880: 98.5297% ( 1) 00:10:34.289 3.342 - 3.366: 98.5371% ( 1) 00:10:34.289 3.366 - 3.390: 98.5445% ( 1) 00:10:34.289 3.461 - 3.484: 98.5519% ( 1) 00:10:34.289 3.532 - 3.556: 98.5815% ( 4) 00:10:34.289 3.556 - 3.579: 98.6110% ( 4) 00:10:34.289 3.603 - 3.627: 98.6184% ( 1) 00:10:34.289 3.627 - 3.650: 98.6258% ( 1) 00:10:34.289 3.650 - 3.674: 98.6479% ( 3) 00:10:34.289 3.698 - 3.721: 98.6701% ( 3) 00:10:34.289 3.745 - 3.769: 98.6775% ( 1) 00:10:34.289 3.769 - 3.793: 98.6997% ( 3) 00:10:34.289 3.840 - 3.864: 98.7071% ( 1) 00:10:34.289 4.006 - 4.030: 98.7144% ( 1) 00:10:34.289 4.314 - 4.338: 98.7218% ( 1) 00:10:34.289 5.428 - 5.452: 98.7292% ( 1) 00:10:34.289 5.452 - 5.476: 98.7366% ( 1) 00:10:34.289 5.523 - 5.547: 98.7440% ( 1) 00:10:34.289 5.570 - 5.594: 98.7514% ( 1) 00:10:34.289 5.760 - 5.784: 98.7588% ( 1) 00:10:34.289 5.784 - 5.807: 98.7662% ( 1) 00:10:34.289 5.997 - 6.021: 98.7736% ( 1) 00:10:34.289 6.068 - 6.116: 98.7809% ( 1) 00:10:34.289 6.116 - 6.163: 98.7883% ( 1) 00:10:34.289 6.163 - 6.210: 98.7957% ( 1) 00:10:34.289 6.258 - 6.305: 98.8105% ( 2) 00:10:34.289 6.305 - 6.353: 98.8179% ( 1) 00:10:34.289 6.353 - 6.400: 98.8253% ( 1) 00:10:34.290 [2024-04-24 19:41:15.674182] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:34.290 6.447 - 6.495: 98.8548% ( 4) 00:10:34.290 6.495 - 6.542: 98.8622% ( 1) 00:10:34.290 6.637 - 6.684: 98.8696% ( 1) 00:10:34.290 7.206 - 7.253: 98.8770% ( 1) 00:10:34.290 7.490 - 7.538: 98.8844% ( 1) 00:10:34.290 8.391 - 8.439: 98.8918% ( 1) 00:10:34.290 15.360 - 15.455: 98.8992% ( 1) 00:10:34.290 15.550 - 15.644: 98.9213% ( 3) 00:10:34.290 15.644 - 15.739: 98.9509% ( 4) 00:10:34.290 15.739 - 15.834: 98.9952% ( 6) 00:10:34.290 15.929 - 16.024: 99.0100% ( 2) 00:10:34.290 16.024 - 16.119: 99.0248% ( 2) 00:10:34.290 16.119 - 16.213: 99.0469% ( 3) 00:10:34.290 16.213 - 16.308: 99.0839% ( 5) 00:10:34.290 16.308 - 16.403: 99.1060% ( 3) 00:10:34.290 16.403 - 16.498: 99.1134% ( 1) 00:10:34.290 16.498 - 16.593: 99.1430% ( 4) 00:10:34.290 16.593 - 16.687: 99.1799% ( 5) 00:10:34.290 16.687 - 16.782: 99.2095% ( 4) 00:10:34.290 16.782 - 16.877: 99.2464% ( 5) 00:10:34.290 16.877 - 16.972: 99.2760% ( 4) 00:10:34.290 16.972 - 17.067: 99.3055% ( 4) 00:10:34.290 17.067 - 17.161: 99.3203% ( 2) 00:10:34.290 17.161 - 17.256: 99.3277% ( 1) 00:10:34.290 17.541 - 17.636: 99.3351% ( 1) 00:10:34.290 17.730 - 17.825: 99.3424% ( 1) 00:10:34.290 17.825 - 17.920: 99.3498% ( 1) 00:10:34.290 18.110 - 18.204: 99.3572% ( 1) 00:10:34.290 18.584 - 18.679: 99.3646% ( 1) 00:10:34.290 3470.981 - 3495.253: 99.3720% ( 1) 00:10:34.290 3980.705 - 4004.978: 99.9483% ( 78) 00:10:34.290 4004.978 - 4029.250: 99.9926% ( 6) 00:10:34.290 4053.523 - 4077.796: 100.0000% ( 1) 00:10:34.290 00:10:34.290 19:41:15 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:34.290 19:41:15 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:34.290 19:41:15 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:34.290 19:41:15 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:34.290 19:41:15 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:34.549 [2024-04-24 19:41:15.933442] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in
favor of trtype to be removed in v24.05 00:10:34.549 [ 00:10:34.549 { 00:10:34.549 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:34.549 "subtype": "Discovery", 00:10:34.549 "listen_addresses": [], 00:10:34.549 "allow_any_host": true, 00:10:34.549 "hosts": [] 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:34.549 "subtype": "NVMe", 00:10:34.549 "listen_addresses": [ 00:10:34.549 { 00:10:34.549 "transport": "VFIOUSER", 00:10:34.549 "trtype": "VFIOUSER", 00:10:34.549 "adrfam": "IPv4", 00:10:34.549 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:34.549 "trsvcid": "0" 00:10:34.549 } 00:10:34.549 ], 00:10:34.549 "allow_any_host": true, 00:10:34.549 "hosts": [], 00:10:34.549 "serial_number": "SPDK1", 00:10:34.549 "model_number": "SPDK bdev Controller", 00:10:34.549 "max_namespaces": 32, 00:10:34.549 "min_cntlid": 1, 00:10:34.549 "max_cntlid": 65519, 00:10:34.549 "namespaces": [ 00:10:34.549 { 00:10:34.549 "nsid": 1, 00:10:34.549 "bdev_name": "Malloc1", 00:10:34.549 "name": "Malloc1", 00:10:34.549 "nguid": "291B8DD17AAC4E7B93E7FB132B96C50D", 00:10:34.549 "uuid": "291b8dd1-7aac-4e7b-93e7-fb132b96c50d" 00:10:34.549 } 00:10:34.549 ] 00:10:34.549 }, 00:10:34.549 { 00:10:34.549 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:34.549 "subtype": "NVMe", 00:10:34.549 "listen_addresses": [ 00:10:34.549 { 00:10:34.549 "transport": "VFIOUSER", 00:10:34.549 "trtype": "VFIOUSER", 00:10:34.549 "adrfam": "IPv4", 00:10:34.549 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:34.549 "trsvcid": "0" 00:10:34.549 } 00:10:34.549 ], 00:10:34.549 "allow_any_host": true, 00:10:34.549 "hosts": [], 00:10:34.549 "serial_number": "SPDK2", 00:10:34.549 "model_number": "SPDK bdev Controller", 00:10:34.549 "max_namespaces": 32, 00:10:34.549 "min_cntlid": 1, 00:10:34.549 "max_cntlid": 65519, 00:10:34.549 "namespaces": [ 00:10:34.549 { 00:10:34.549 "nsid": 1, 00:10:34.549 "bdev_name": "Malloc2", 00:10:34.549 "name": "Malloc2", 00:10:34.549 "nguid": "7BE2C97212C14183B838FD1C40FF75DE", 00:10:34.549 "uuid": "7be2c972-12c1-4183-b838-fd1c40ff75de" 00:10:34.549 } 00:10:34.549 ] 00:10:34.549 } 00:10:34.549 ] 00:10:34.549 19:41:15 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:34.549 19:41:15 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1650399 00:10:34.549 19:41:15 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:34.549 19:41:15 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:34.549 19:41:15 -- common/autotest_common.sh@1251 -- # local i=0 00:10:34.549 19:41:15 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:34.549 19:41:15 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:34.549 19:41:15 -- common/autotest_common.sh@1262 -- # return 0 00:10:34.549 19:41:15 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:34.549 19:41:15 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:34.549 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.808 [2024-04-24 19:41:16.106919] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:34.808 Malloc3 00:10:34.808 19:41:16 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:35.066 [2024-04-24 19:41:16.459569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:35.066 19:41:16 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:35.066 Asynchronous Event Request test 00:10:35.066 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:35.066 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:35.066 Registering asynchronous event callbacks... 00:10:35.066 Starting namespace attribute notice tests for all controllers... 00:10:35.066 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:35.066 aer_cb - Changed Namespace 00:10:35.066 Cleaning up... 00:10:35.325 [ 00:10:35.325 { 00:10:35.325 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:35.325 "subtype": "Discovery", 00:10:35.325 "listen_addresses": [], 00:10:35.325 "allow_any_host": true, 00:10:35.325 "hosts": [] 00:10:35.325 }, 00:10:35.325 { 00:10:35.325 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:35.325 "subtype": "NVMe", 00:10:35.325 "listen_addresses": [ 00:10:35.325 { 00:10:35.325 "transport": "VFIOUSER", 00:10:35.325 "trtype": "VFIOUSER", 00:10:35.325 "adrfam": "IPv4", 00:10:35.325 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:35.325 "trsvcid": "0" 00:10:35.325 } 00:10:35.325 ], 00:10:35.325 "allow_any_host": true, 00:10:35.325 "hosts": [], 00:10:35.325 "serial_number": "SPDK1", 00:10:35.325 "model_number": "SPDK bdev Controller", 00:10:35.325 "max_namespaces": 32, 00:10:35.325 "min_cntlid": 1, 00:10:35.325 "max_cntlid": 65519, 00:10:35.325 "namespaces": [ 00:10:35.325 { 00:10:35.325 "nsid": 1, 00:10:35.325 "bdev_name": "Malloc1", 00:10:35.325 "name": "Malloc1", 00:10:35.325 "nguid": "291B8DD17AAC4E7B93E7FB132B96C50D", 00:10:35.325 "uuid": "291b8dd1-7aac-4e7b-93e7-fb132b96c50d" 00:10:35.325 }, 00:10:35.325 { 00:10:35.325 "nsid": 2, 00:10:35.325 "bdev_name": "Malloc3", 00:10:35.325 "name": "Malloc3", 00:10:35.325 "nguid": "DA740D76EEC740F09E83ED62D117E7BA", 00:10:35.326 "uuid": "da740d76-eec7-40f0-9e83-ed62d117e7ba" 00:10:35.326 } 00:10:35.326 ] 00:10:35.326 }, 00:10:35.326 { 00:10:35.326 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:35.326 "subtype": "NVMe", 00:10:35.326 "listen_addresses": [ 00:10:35.326 { 00:10:35.326 "transport": "VFIOUSER", 00:10:35.326 "trtype": "VFIOUSER", 00:10:35.326 "adrfam": "IPv4", 00:10:35.326 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:35.326 "trsvcid": "0" 00:10:35.326 } 00:10:35.326 ], 00:10:35.326 "allow_any_host": true, 00:10:35.326 "hosts": [], 00:10:35.326 "serial_number": "SPDK2", 00:10:35.326 "model_number": "SPDK bdev Controller", 00:10:35.326 "max_namespaces": 32, 00:10:35.326 "min_cntlid": 1, 
00:10:35.326 "max_cntlid": 65519, 00:10:35.326 "namespaces": [ 00:10:35.326 { 00:10:35.326 "nsid": 1, 00:10:35.326 "bdev_name": "Malloc2", 00:10:35.326 "name": "Malloc2", 00:10:35.326 "nguid": "7BE2C97212C14183B838FD1C40FF75DE", 00:10:35.326 "uuid": "7be2c972-12c1-4183-b838-fd1c40ff75de" 00:10:35.326 } 00:10:35.326 ] 00:10:35.326 } 00:10:35.326 ] 00:10:35.326 19:41:16 -- target/nvmf_vfio_user.sh@44 -- # wait 1650399 00:10:35.326 19:41:16 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:35.326 19:41:16 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:35.326 19:41:16 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:35.326 19:41:16 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:35.326 [2024-04-24 19:41:16.727872] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:10:35.326 [2024-04-24 19:41:16.727916] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650526 ] 00:10:35.326 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.326 [2024-04-24 19:41:16.761819] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:35.326 [2024-04-24 19:41:16.770910] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:35.326 [2024-04-24 19:41:16.770949] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7efe754c7000 00:10:35.326 [2024-04-24 19:41:16.771907] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:35.326 [2024-04-24 19:41:16.772911] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:35.326 [2024-04-24 19:41:16.773929] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:35.326 [2024-04-24 19:41:16.774942] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:35.326 [2024-04-24 19:41:16.775945] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:35.326 [2024-04-24 19:41:16.776952] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:35.326 [2024-04-24 19:41:16.777965] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:35.326 [2024-04-24 19:41:16.778965] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:35.326 [2024-04-24 19:41:16.779969] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:35.326 [2024-04-24 19:41:16.779995] vfio_user_pci.c: 
233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7efe754bc000 00:10:35.326 [2024-04-24 19:41:16.781109] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:35.326 [2024-04-24 19:41:16.799788] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:35.326 [2024-04-24 19:41:16.799823] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:35.326 [2024-04-24 19:41:16.801942] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:35.326 [2024-04-24 19:41:16.801992] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:35.326 [2024-04-24 19:41:16.802076] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:10:35.326 [2024-04-24 19:41:16.802102] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:35.326 [2024-04-24 19:41:16.802112] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:35.326 [2024-04-24 19:41:16.802924] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:35.326 [2024-04-24 19:41:16.802958] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:35.326 [2024-04-24 19:41:16.802972] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:35.326 [2024-04-24 19:41:16.803938] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:35.326 [2024-04-24 19:41:16.803959] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:35.326 [2024-04-24 19:41:16.803973] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:35.326 [2024-04-24 19:41:16.804928] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:35.326 [2024-04-24 19:41:16.804964] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:35.326 [2024-04-24 19:41:16.805948] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:35.326 [2024-04-24 19:41:16.805967] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:35.326 [2024-04-24 19:41:16.805977] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:35.326 [2024-04-24 19:41:16.805993] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:35.326 [2024-04-24 19:41:16.806103] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:35.326 [2024-04-24 19:41:16.806112] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:35.326 [2024-04-24 19:41:16.806120] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:35.326 [2024-04-24 19:41:16.806953] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:35.326 [2024-04-24 19:41:16.807962] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:35.326 [2024-04-24 19:41:16.808964] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:35.326 [2024-04-24 19:41:16.809974] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:35.326 [2024-04-24 19:41:16.810038] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:35.326 [2024-04-24 19:41:16.810990] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:35.326 [2024-04-24 19:41:16.811010] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:35.326 [2024-04-24 19:41:16.811020] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:35.326 [2024-04-24 19:41:16.811058] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:35.326 [2024-04-24 19:41:16.811075] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:35.326 [2024-04-24 19:41:16.811097] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:35.326 [2024-04-24 19:41:16.811107] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:35.326 [2024-04-24 19:41:16.811126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:35.326 [2024-04-24 19:41:16.817641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:35.326 [2024-04-24 19:41:16.817665] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:35.326 [2024-04-24 19:41:16.817674] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:35.326 [2024-04-24 19:41:16.817682] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:35.326 [2024-04-24 19:41:16.817689] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:35.326 [2024-04-24 19:41:16.817697] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:35.326 [2024-04-24 19:41:16.817705] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:35.326 [2024-04-24 19:41:16.817713] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:35.326 [2024-04-24 19:41:16.817727] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:35.326 [2024-04-24 19:41:16.817747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:35.326 [2024-04-24 19:41:16.825641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:35.326 [2024-04-24 19:41:16.825670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.327 [2024-04-24 19:41:16.825686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.327 [2024-04-24 19:41:16.825699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.327 [2024-04-24 19:41:16.825712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:35.327 [2024-04-24 19:41:16.825721] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:35.327 [2024-04-24 19:41:16.825737] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:35.327 [2024-04-24 19:41:16.825753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:35.327 [2024-04-24 19:41:16.833640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:35.327 [2024-04-24 19:41:16.833659] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:35.327 [2024-04-24 19:41:16.833668] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:35.327 [2024-04-24 19:41:16.833684] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:35.327 [2024-04-24 19:41:16.833696] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:35.327 [2024-04-24 19:41:16.833710] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:35.588 [2024-04-24 19:41:16.840165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:35.588 [2024-04-24 19:41:16.840232] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:35.588 [2024-04-24 19:41:16.840248] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:35.588 [2024-04-24 19:41:16.840262] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:35.588 [2024-04-24 19:41:16.840271] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:35.588 [2024-04-24 19:41:16.840281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:35.588 [2024-04-24 19:41:16.848653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:35.589 [2024-04-24 19:41:16.848676] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:35.589 [2024-04-24 19:41:16.848692] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:35.589 [2024-04-24 19:41:16.848707] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:35.589 [2024-04-24 19:41:16.848724] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:35.589 [2024-04-24 19:41:16.848733] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:35.589 [2024-04-24 19:41:16.848743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:35.589 [2024-04-24 19:41:16.856640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:35.589 [2024-04-24 19:41:16.856668] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:35.589 [2024-04-24 19:41:16.856684] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:35.589 [2024-04-24 19:41:16.856697] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:35.589 [2024-04-24 19:41:16.856706] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:35.589 [2024-04-24 19:41:16.856715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:35.589 [2024-04-24 19:41:16.864642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:35.589 [2024-04-24 19:41:16.864663] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:35.589 [2024-04-24 19:41:16.864675] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:35.589 [2024-04-24 19:41:16.864690] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:35.589 [2024-04-24 19:41:16.864701] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:35.589 [2024-04-24 19:41:16.864709] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:35.589 [2024-04-24 19:41:16.864719] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:35.589 [2024-04-24 19:41:16.864726] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:35.589 [2024-04-24 19:41:16.864734] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:35.589 [2024-04-24 19:41:16.864759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:35.589 [2024-04-24 19:41:16.872638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:35.589 [2024-04-24 19:41:16.872664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:35.589 [2024-04-24 19:41:16.880641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:35.589 [2024-04-24 19:41:16.880666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:35.589 [2024-04-24 19:41:16.888636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:35.589 [2024-04-24 19:41:16.888663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:35.589 [2024-04-24 19:41:16.896640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:35.589 [2024-04-24 19:41:16.896667] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:35.589 [2024-04-24 19:41:16.896677] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:35.589 [2024-04-24 19:41:16.896683] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:35.589 [2024-04-24 19:41:16.896689] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:35.589 [2024-04-24 19:41:16.896699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:35.589 
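The register traffic traced above is the standard NVMe controller enable handshake, carried here over the vfio-user socket rather than real PCIe: clear CC.EN and wait for CSTS.RDY = 0, program AQA/ASQ/ACQ (offsets 0x24/0x28/0x30), set CC.EN = 1 (offset 0x14, ending at 0x460001 above), then poll CSTS (offset 0x1c) until RDY = 1, which is what the 15000 ms "wait for CSTS.RDY" states are timing. A minimal sketch of that sequence, assuming hypothetical read_reg/write_reg accessors for the controller register file (only the offsets and bit positions come from the log; none of this is SPDK API):

    import time

    CC, CSTS, AQA, ASQ, ACQ = 0x14, 0x1C, 0x24, 0x28, 0x30  # NVMe register offsets, as seen above
    CC_EN = 1 << 0      # CC.EN is bit 0
    CSTS_RDY = 1 << 0   # CSTS.RDY is bit 0

    def wait_for(cond, timeout_s=15.0, poll_s=0.01):
        deadline = time.monotonic() + timeout_s
        while not cond():
            if time.monotonic() > deadline:
                raise TimeoutError("CSTS.RDY did not reach the expected state")
            time.sleep(poll_s)

    def enable_controller(read_reg, write_reg, aqa, asq, acq):
        write_reg(CC, read_reg(CC) & ~CC_EN)                # disable first ...
        wait_for(lambda: (read_reg(CSTS) & CSTS_RDY) == 0)  # ... and wait for RDY = 0
        write_reg(AQA, aqa)                                 # admin queue sizes
        write_reg(ASQ, asq)                                 # admin submission queue base
        write_reg(ACQ, acq)                                 # admin completion queue base
        write_reg(CC, read_reg(CC) | CC_EN)                 # e.g. CC -> 0x460001 above
        wait_for(lambda: (read_reg(CSTS) & CSTS_RDY) != 0)  # controller is ready

Once RDY = 1, the host proceeds to the admin-command phase visible in the entries that follow (IDENTIFY, SET FEATURES, GET LOG PAGE).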
[2024-04-24 19:41:16.896711] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:35.589 [2024-04-24 19:41:16.896720] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:35.589 [2024-04-24 19:41:16.896729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:35.589 [2024-04-24 19:41:16.896740] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:35.589 [2024-04-24 19:41:16.896748] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:35.589 [2024-04-24 19:41:16.896757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:35.589 [2024-04-24 19:41:16.896770] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:35.589 [2024-04-24 19:41:16.896778] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:35.589 [2024-04-24 19:41:16.896786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:35.589 [2024-04-24 19:41:16.904638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:35.589 [2024-04-24 19:41:16.904666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:35.589 [2024-04-24 19:41:16.904682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:35.589 [2024-04-24 19:41:16.904694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:35.589 ===================================================== 00:10:35.589 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:35.589 ===================================================== 00:10:35.589 Controller Capabilities/Features 00:10:35.589 ================================ 00:10:35.589 Vendor ID: 4e58 00:10:35.589 Subsystem Vendor ID: 4e58 00:10:35.589 Serial Number: SPDK2 00:10:35.589 Model Number: SPDK bdev Controller 00:10:35.589 Firmware Version: 24.05 00:10:35.589 Recommended Arb Burst: 6 00:10:35.589 IEEE OUI Identifier: 8d 6b 50 00:10:35.589 Multi-path I/O 00:10:35.589 May have multiple subsystem ports: Yes 00:10:35.589 May have multiple controllers: Yes 00:10:35.589 Associated with SR-IOV VF: No 00:10:35.589 Max Data Transfer Size: 131072 00:10:35.589 Max Number of Namespaces: 32 00:10:35.589 Max Number of I/O Queues: 127 00:10:35.589 NVMe Specification Version (VS): 1.3 00:10:35.589 NVMe Specification Version (Identify): 1.3 00:10:35.589 Maximum Queue Entries: 256 00:10:35.589 Contiguous Queues Required: Yes 00:10:35.589 Arbitration Mechanisms Supported 00:10:35.589 Weighted Round Robin: Not Supported 00:10:35.589 Vendor Specific: Not Supported 00:10:35.589 Reset Timeout: 15000 ms 00:10:35.589 Doorbell Stride: 4 bytes 00:10:35.589 NVM Subsystem Reset: Not Supported 00:10:35.589 Command Sets Supported 00:10:35.589 NVM Command Set: Supported 00:10:35.589 Boot Partition: Not Supported 00:10:35.589 
Memory Page Size Minimum: 4096 bytes 00:10:35.589 Memory Page Size Maximum: 4096 bytes 00:10:35.589 Persistent Memory Region: Not Supported 00:10:35.589 Optional Asynchronous Events Supported 00:10:35.589 Namespace Attribute Notices: Supported 00:10:35.589 Firmware Activation Notices: Not Supported 00:10:35.589 ANA Change Notices: Not Supported 00:10:35.589 PLE Aggregate Log Change Notices: Not Supported 00:10:35.589 LBA Status Info Alert Notices: Not Supported 00:10:35.589 EGE Aggregate Log Change Notices: Not Supported 00:10:35.589 Normal NVM Subsystem Shutdown event: Not Supported 00:10:35.589 Zone Descriptor Change Notices: Not Supported 00:10:35.589 Discovery Log Change Notices: Not Supported 00:10:35.589 Controller Attributes 00:10:35.589 128-bit Host Identifier: Supported 00:10:35.589 Non-Operational Permissive Mode: Not Supported 00:10:35.589 NVM Sets: Not Supported 00:10:35.589 Read Recovery Levels: Not Supported 00:10:35.589 Endurance Groups: Not Supported 00:10:35.589 Predictable Latency Mode: Not Supported 00:10:35.589 Traffic Based Keep Alive: Not Supported 00:10:35.589 Namespace Granularity: Not Supported 00:10:35.589 SQ Associations: Not Supported 00:10:35.589 UUID List: Not Supported 00:10:35.589 Multi-Domain Subsystem: Not Supported 00:10:35.589 Fixed Capacity Management: Not Supported 00:10:35.589 Variable Capacity Management: Not Supported 00:10:35.589 Delete Endurance Group: Not Supported 00:10:35.589 Delete NVM Set: Not Supported 00:10:35.589 Extended LBA Formats Supported: Not Supported 00:10:35.589 Flexible Data Placement Supported: Not Supported 00:10:35.589 00:10:35.589 Controller Memory Buffer Support 00:10:35.589 ================================ 00:10:35.589 Supported: No 00:10:35.589 00:10:35.589 Persistent Memory Region Support 00:10:35.589 ================================ 00:10:35.589 Supported: No 00:10:35.589 00:10:35.589 Admin Command Set Attributes 00:10:35.589 ============================ 00:10:35.589 Security Send/Receive: Not Supported 00:10:35.589 Format NVM: Not Supported 00:10:35.589 Firmware Activate/Download: Not Supported 00:10:35.589 Namespace Management: Not Supported 00:10:35.589 Device Self-Test: Not Supported 00:10:35.589 Directives: Not Supported 00:10:35.589 NVMe-MI: Not Supported 00:10:35.589 Virtualization Management: Not Supported 00:10:35.589 Doorbell Buffer Config: Not Supported 00:10:35.589 Get LBA Status Capability: Not Supported 00:10:35.589 Command & Feature Lockdown Capability: Not Supported 00:10:35.590 Abort Command Limit: 4 00:10:35.590 Async Event Request Limit: 4 00:10:35.590 Number of Firmware Slots: N/A 00:10:35.590 Firmware Slot 1 Read-Only: N/A 00:10:35.590 Firmware Activation Without Reset: N/A 00:10:35.590 Multiple Update Detection Support: N/A 00:10:35.590 Firmware Update Granularity: No Information Provided 00:10:35.590 Per-Namespace SMART Log: No 00:10:35.590 Asymmetric Namespace Access Log Page: Not Supported 00:10:35.590 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:35.590 Command Effects Log Page: Supported 00:10:35.590 Get Log Page Extended Data: Supported 00:10:35.590 Telemetry Log Pages: Not Supported 00:10:35.590 Persistent Event Log Pages: Not Supported 00:10:35.590 Supported Log Pages Log Page: May Support 00:10:35.590 Commands Supported & Effects Log Page: Not Supported 00:10:35.590 Feature Identifiers & Effects Log Page: May Support 00:10:35.590 NVMe-MI Commands & Effects Log Page: May Support 00:10:35.590 Data Area 4 for Telemetry Log: Not Supported 00:10:35.590 Error Log Page Entries Supported: 128
00:10:35.590 Keep Alive: Supported 00:10:35.590 Keep Alive Granularity: 10000 ms 00:10:35.590 00:10:35.590 NVM Command Set Attributes 00:10:35.590 ========================== 00:10:35.590 Submission Queue Entry Size 00:10:35.590 Max: 64 00:10:35.590 Min: 64 00:10:35.590 Completion Queue Entry Size 00:10:35.590 Max: 16 00:10:35.590 Min: 16 00:10:35.590 Number of Namespaces: 32 00:10:35.590 Compare Command: Supported 00:10:35.590 Write Uncorrectable Command: Not Supported 00:10:35.590 Dataset Management Command: Supported 00:10:35.590 Write Zeroes Command: Supported 00:10:35.590 Set Features Save Field: Not Supported 00:10:35.590 Reservations: Not Supported 00:10:35.590 Timestamp: Not Supported 00:10:35.590 Copy: Supported 00:10:35.590 Volatile Write Cache: Present 00:10:35.590 Atomic Write Unit (Normal): 1 00:10:35.590 Atomic Write Unit (PFail): 1 00:10:35.590 Atomic Compare & Write Unit: 1 00:10:35.590 Fused Compare & Write: Supported 00:10:35.590 Scatter-Gather List 00:10:35.590 SGL Command Set: Supported (Dword aligned) 00:10:35.590 SGL Keyed: Not Supported 00:10:35.590 SGL Bit Bucket Descriptor: Not Supported 00:10:35.590 SGL Metadata Pointer: Not Supported 00:10:35.590 Oversized SGL: Not Supported 00:10:35.590 SGL Metadata Address: Not Supported 00:10:35.590 SGL Offset: Not Supported 00:10:35.590 Transport SGL Data Block: Not Supported 00:10:35.590 Replay Protected Memory Block: Not Supported 00:10:35.590 00:10:35.590 Firmware Slot Information 00:10:35.590 ========================= 00:10:35.590 Active slot: 1 00:10:35.590 Slot 1 Firmware Revision: 24.05 00:10:35.590 00:10:35.590 00:10:35.590 Commands Supported and Effects 00:10:35.590 ============================== 00:10:35.590 Admin Commands 00:10:35.590 -------------- 00:10:35.590 Get Log Page (02h): Supported 00:10:35.590 Identify (06h): Supported 00:10:35.590 Abort (08h): Supported 00:10:35.590 Set Features (09h): Supported 00:10:35.590 Get Features (0Ah): Supported 00:10:35.590 Asynchronous Event Request (0Ch): Supported 00:10:35.590 Keep Alive (18h): Supported 00:10:35.590 I/O Commands 00:10:35.590 ------------ 00:10:35.590 Flush (00h): Supported LBA-Change 00:10:35.590 Write (01h): Supported LBA-Change 00:10:35.590 Read (02h): Supported 00:10:35.590 Compare (05h): Supported 00:10:35.590 Write Zeroes (08h): Supported LBA-Change 00:10:35.590 Dataset Management (09h): Supported LBA-Change 00:10:35.590 Copy (19h): Supported LBA-Change 00:10:35.590 Unknown (79h): Supported LBA-Change 00:10:35.590 Unknown (7Ah): Supported 00:10:35.590 00:10:35.590 Error Log 00:10:35.590 ========= 00:10:35.590 00:10:35.590 Arbitration 00:10:35.590 =========== 00:10:35.590 Arbitration Burst: 1 00:10:35.590 00:10:35.590 Power Management 00:10:35.590 ================ 00:10:35.590 Number of Power States: 1 00:10:35.590 Current Power State: Power State #0 00:10:35.590 Power State #0: 00:10:35.590 Max Power: 0.00 W 00:10:35.590 Non-Operational State: Operational 00:10:35.590 Entry Latency: Not Reported 00:10:35.590 Exit Latency: Not Reported 00:10:35.590 Relative Read Throughput: 0 00:10:35.590 Relative Read Latency: 0 00:10:35.590 Relative Write Throughput: 0 00:10:35.590 Relative Write Latency: 0 00:10:35.590 Idle Power: Not Reported 00:10:35.590 Active Power: Not Reported 00:10:35.590 Non-Operational Permissive Mode: Not Supported 00:10:35.590 00:10:35.590 Health Information 00:10:35.590 ================== 00:10:35.590 Critical Warnings: 00:10:35.590 Available Spare Space: OK 00:10:35.590 Temperature: OK 00:10:35.590 Device Reliability: OK 00:10:35.590 
Read Only: No 00:10:35.590 Volatile Memory Backup: OK 00:10:35.590 [2024-04-24 19:41:16.904821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:35.590 [2024-04-24 19:41:16.912641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:35.590 [2024-04-24 19:41:16.912688] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:35.590 [2024-04-24 19:41:16.912705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.590 [2024-04-24 19:41:16.912716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.590 [2024-04-24 19:41:16.912726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.590 [2024-04-24 19:41:16.912735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:35.590 [2024-04-24 19:41:16.912811] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:35.590 [2024-04-24 19:41:16.912831] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:35.590 [2024-04-24 19:41:16.913809] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:35.590 [2024-04-24 19:41:16.913883] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:35.590 [2024-04-24 19:41:16.913898] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:35.590 [2024-04-24 19:41:16.914823] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:35.590 [2024-04-24 19:41:16.914848] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:35.590 [2024-04-24 19:41:16.914900] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:35.590 [2024-04-24 19:41:16.916094] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:35.590 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:35.590 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:35.590 Available Spare: 0% 00:10:35.590 Available Spare Threshold: 0% 00:10:35.590 Life Percentage Used: 0% 00:10:35.590 Data Units Read: 0 00:10:35.590 Data Units Written: 0 00:10:35.590 Host Read Commands: 0 00:10:35.590 Host Write Commands: 0 00:10:35.590 Controller Busy Time: 0 minutes 00:10:35.590 Power Cycles: 0 00:10:35.590 Power On Hours: 0 hours 00:10:35.590 Unsafe Shutdowns: 0 00:10:35.590 Unrecoverable Media Errors: 0 00:10:35.590 Lifetime Error Log Entries: 0 00:10:35.590 Warning Temperature Time: 0 minutes 00:10:35.590 Critical Temperature Time: 0 minutes 00:10:35.590 00:10:35.590 Number of Queues 00:10:35.590 ================ 00:10:35.590 Number of I/O Submission Queues: 127
00:10:35.590 Number of I/O Completion Queues: 127 00:10:35.590 00:10:35.590 Active Namespaces 00:10:35.590 ================= 00:10:35.590 Namespace ID:1 00:10:35.590 Error Recovery Timeout: Unlimited 00:10:35.590 Command Set Identifier: NVM (00h) 00:10:35.590 Deallocate: Supported 00:10:35.590 Deallocated/Unwritten Error: Not Supported 00:10:35.590 Deallocated Read Value: Unknown 00:10:35.590 Deallocate in Write Zeroes: Not Supported 00:10:35.590 Deallocated Guard Field: 0xFFFF 00:10:35.590 Flush: Supported 00:10:35.590 Reservation: Supported 00:10:35.590 Namespace Sharing Capabilities: Multiple Controllers 00:10:35.590 Size (in LBAs): 131072 (0GiB) 00:10:35.590 Capacity (in LBAs): 131072 (0GiB) 00:10:35.590 Utilization (in LBAs): 131072 (0GiB) 00:10:35.590 NGUID: 7BE2C97212C14183B838FD1C40FF75DE 00:10:35.590 UUID: 7be2c972-12c1-4183-b838-fd1c40ff75de 00:10:35.590 Thin Provisioning: Not Supported 00:10:35.590 Per-NS Atomic Units: Yes 00:10:35.590 Atomic Boundary Size (Normal): 0 00:10:35.590 Atomic Boundary Size (PFail): 0 00:10:35.590 Atomic Boundary Offset: 0 00:10:35.590 Maximum Single Source Range Length: 65535 00:10:35.590 Maximum Copy Length: 65535 00:10:35.590 Maximum Source Range Count: 1 00:10:35.590 NGUID/EUI64 Never Reused: No 00:10:35.590 Namespace Write Protected: No 00:10:35.590 Number of LBA Formats: 1 00:10:35.590 Current LBA Format: LBA Format #00 00:10:35.590 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:35.591 00:10:35.591 19:41:16 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:35.591 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.849 [2024-04-24 19:41:17.134389] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:41.128 [2024-04-24 19:41:22.239972] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:41.128 Initializing NVMe Controllers 00:10:41.128 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:41.128 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:41.128 Initialization complete. Launching workers. 
00:10:41.128 ======================================================== 00:10:41.128 Latency(us) 00:10:41.129 Device Information : IOPS MiB/s Average min max 00:10:41.129 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34545.78 134.94 3703.92 1211.14 10393.21 00:10:41.129 ======================================================== 00:10:41.129 Total : 34545.78 134.94 3703.92 1211.14 10393.21 00:10:41.129 00:10:41.129 19:41:22 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:41.129 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.129 [2024-04-24 19:41:22.471577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:46.406 [2024-04-24 19:41:27.492779] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:46.406 Initializing NVMe Controllers 00:10:46.406 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:46.406 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:46.406 Initialization complete. Launching workers. 00:10:46.406 ======================================================== 00:10:46.406 Latency(us) 00:10:46.406 Device Information : IOPS MiB/s Average min max 00:10:46.406 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33532.84 130.99 3816.45 1224.03 7392.35 00:10:46.406 ======================================================== 00:10:46.406 Total : 33532.84 130.99 3816.45 1224.03 7392.35 00:10:46.407 00:10:46.407 19:41:27 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:46.407 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.407 [2024-04-24 19:41:27.696368] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:51.719 [2024-04-24 19:41:32.836773] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:51.719 Initializing NVMe Controllers 00:10:51.719 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:51.719 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:51.719 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:51.719 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:51.719 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:51.719 Initialization complete. Launching workers. 
00:10:51.719 Starting thread on core 2 00:10:51.719 Starting thread on core 3 00:10:51.719 Starting thread on core 1 00:10:51.719 19:41:32 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:51.719 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.719 [2024-04-24 19:41:33.146141] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:55.011 [2024-04-24 19:41:36.216930] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:55.011 Initializing NVMe Controllers 00:10:55.011 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:55.011 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:55.011 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:55.011 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:55.011 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:55.011 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:55.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:55.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:55.011 Initialization complete. Launching workers. 00:10:55.011 Starting thread on core 1 with urgent priority queue 00:10:55.011 Starting thread on core 2 with urgent priority queue 00:10:55.011 Starting thread on core 3 with urgent priority queue 00:10:55.011 Starting thread on core 0 with urgent priority queue 00:10:55.011 SPDK bdev Controller (SPDK2 ) core 0: 5056.67 IO/s 19.78 secs/100000 ios 00:10:55.011 SPDK bdev Controller (SPDK2 ) core 1: 5143.00 IO/s 19.44 secs/100000 ios 00:10:55.011 SPDK bdev Controller (SPDK2 ) core 2: 5238.67 IO/s 19.09 secs/100000 ios 00:10:55.011 SPDK bdev Controller (SPDK2 ) core 3: 5384.00 IO/s 18.57 secs/100000 ios 00:10:55.011 ======================================================== 00:10:55.011 00:10:55.011 19:41:36 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:55.011 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.011 [2024-04-24 19:41:36.509301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:55.011 [2024-04-24 19:41:36.518483] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:55.269 Initializing NVMe Controllers 00:10:55.269 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:55.269 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:55.269 Namespace ID: 1 size: 0GB 00:10:55.269 Initialization complete. 00:10:55.269 INFO: using host memory buffer for IO 00:10:55.269 Hello world! 
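The arbitration summary above is internally consistent: each core's "secs/100000 ios" figure is simply 100000 ios divided by that core's IO/s. A quick arithmetic check over the four values printed above (throwaway Python, not part of the test suite):

    # seconds to finish 100000 ios = 100000 / (ios per second)
    for core, iops in ((0, 5056.67), (1, 5143.00), (2, 5238.67), (3, 5384.00)):
        print(f"core {core}: {100000 / iops:.2f} secs/100000 ios")
    # prints 19.78, 19.44, 19.09 and 18.57, matching the summary lines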
00:10:55.269 19:41:36 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:55.269 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.527 [2024-04-24 19:41:36.812988] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:56.463 Initializing NVMe Controllers 00:10:56.463 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:56.463 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:56.463 Initialization complete. Launching workers. 00:10:56.463 submit (in ns) avg, min, max = 6786.1, 3454.4, 4016890.0 00:10:56.463 complete (in ns) avg, min, max = 22659.1, 2021.1, 4015177.8 00:10:56.463 00:10:56.463 Submit histogram 00:10:56.463 ================ 00:10:56.463 Range in us Cumulative Count 00:10:56.463 3.437 - 3.461: 0.0146% ( 2) 00:10:56.463 3.461 - 3.484: 0.4757% ( 63) 00:10:56.463 3.484 - 3.508: 2.5174% ( 279) 00:10:56.463 3.508 - 3.532: 5.4153% ( 396) 00:10:56.463 3.532 - 3.556: 12.1698% ( 923) 00:10:56.463 3.556 - 3.579: 22.2613% ( 1379) 00:10:56.463 3.579 - 3.603: 32.0088% ( 1332) 00:10:56.463 3.603 - 3.627: 40.6952% ( 1187) 00:10:56.463 3.627 - 3.650: 48.2473% ( 1032) 00:10:56.463 3.650 - 3.674: 55.1482% ( 943) 00:10:56.463 3.674 - 3.698: 60.4244% ( 721) 00:10:56.463 3.698 - 3.721: 66.0373% ( 767) 00:10:56.463 3.721 - 3.745: 69.6012% ( 487) 00:10:56.463 3.745 - 3.769: 72.9967% ( 464) 00:10:56.463 3.769 - 3.793: 75.8800% ( 394) 00:10:56.463 3.793 - 3.816: 79.2975% ( 467) 00:10:56.463 3.816 - 3.840: 82.6418% ( 457) 00:10:56.463 3.840 - 3.864: 85.3202% ( 366) 00:10:56.463 3.864 - 3.887: 87.6107% ( 313) 00:10:56.463 3.887 - 3.911: 89.1475% ( 210) 00:10:56.463 3.911 - 3.935: 90.9184% ( 242) 00:10:56.463 3.935 - 3.959: 92.3381% ( 194) 00:10:56.463 3.959 - 3.982: 93.5309% ( 163) 00:10:56.463 3.982 - 4.006: 94.4823% ( 130) 00:10:56.463 4.006 - 4.030: 95.1701% ( 94) 00:10:56.463 4.030 - 4.053: 95.6751% ( 69) 00:10:56.463 4.053 - 4.077: 96.1508% ( 65) 00:10:56.463 4.077 - 4.101: 96.4142% ( 36) 00:10:56.463 4.101 - 4.124: 96.5971% ( 25) 00:10:56.463 4.124 - 4.148: 96.7362% ( 19) 00:10:56.463 4.148 - 4.172: 96.8386% ( 14) 00:10:56.463 4.172 - 4.196: 96.9557% ( 16) 00:10:56.463 4.196 - 4.219: 97.0801% ( 17) 00:10:56.463 4.219 - 4.243: 97.1899% ( 15) 00:10:56.463 4.243 - 4.267: 97.2704% ( 11) 00:10:56.463 4.267 - 4.290: 97.3655% ( 13) 00:10:56.463 4.290 - 4.314: 97.4314% ( 9) 00:10:56.463 4.314 - 4.338: 97.4753% ( 6) 00:10:56.463 4.338 - 4.361: 97.5046% ( 4) 00:10:56.463 4.361 - 4.385: 97.5412% ( 5) 00:10:56.463 4.385 - 4.409: 97.5558% ( 2) 00:10:56.463 4.409 - 4.433: 97.5778% ( 3) 00:10:56.463 4.480 - 4.504: 97.5851% ( 1) 00:10:56.463 4.504 - 4.527: 97.6143% ( 4) 00:10:56.463 4.575 - 4.599: 97.6290% ( 2) 00:10:56.463 4.599 - 4.622: 97.6363% ( 1) 00:10:56.463 4.622 - 4.646: 97.6509% ( 2) 00:10:56.463 4.646 - 4.670: 97.6729% ( 3) 00:10:56.463 4.670 - 4.693: 97.6802% ( 1) 00:10:56.463 4.693 - 4.717: 97.7534% ( 10) 00:10:56.463 4.717 - 4.741: 97.8558% ( 14) 00:10:56.463 4.741 - 4.764: 97.9217% ( 9) 00:10:56.463 4.764 - 4.788: 97.9802% ( 8) 00:10:56.463 4.788 - 4.812: 98.0095% ( 4) 00:10:56.463 4.812 - 4.836: 98.0315% ( 3) 00:10:56.463 4.836 - 4.859: 98.0900% ( 8) 00:10:56.463 4.859 - 4.883: 98.1632% ( 10) 00:10:56.463 4.883 - 4.907: 98.1778% ( 2) 00:10:56.463 4.907 - 4.930: 98.2144% ( 5) 00:10:56.463 4.930 - 4.954: 98.2730% ( 8) 00:10:56.463 4.954 
- 4.978: 98.2876% ( 2) 00:10:56.463 5.001 - 5.025: 98.3169% ( 4) 00:10:56.463 5.025 - 5.049: 98.3242% ( 1) 00:10:56.463 5.049 - 5.073: 98.3608% ( 5) 00:10:56.463 5.073 - 5.096: 98.3754% ( 2) 00:10:56.463 5.096 - 5.120: 98.3900% ( 2) 00:10:56.463 5.120 - 5.144: 98.4047% ( 2) 00:10:56.463 5.144 - 5.167: 98.4340% ( 4) 00:10:56.463 5.191 - 5.215: 98.4413% ( 1) 00:10:56.463 5.215 - 5.239: 98.4559% ( 2) 00:10:56.463 5.310 - 5.333: 98.4632% ( 1) 00:10:56.463 5.333 - 5.357: 98.4705% ( 1) 00:10:56.463 5.452 - 5.476: 98.4779% ( 1) 00:10:56.463 5.523 - 5.547: 98.4925% ( 2) 00:10:56.463 5.594 - 5.618: 98.4998% ( 1) 00:10:56.463 5.641 - 5.665: 98.5071% ( 1) 00:10:56.463 5.665 - 5.689: 98.5145% ( 1) 00:10:56.463 5.760 - 5.784: 98.5218% ( 1) 00:10:56.463 6.116 - 6.163: 98.5291% ( 1) 00:10:56.463 6.163 - 6.210: 98.5364% ( 1) 00:10:56.463 6.447 - 6.495: 98.5437% ( 1) 00:10:56.463 6.637 - 6.684: 98.5510% ( 1) 00:10:56.463 6.732 - 6.779: 98.5657% ( 2) 00:10:56.463 6.779 - 6.827: 98.5730% ( 1) 00:10:56.463 6.827 - 6.874: 98.5803% ( 1) 00:10:56.463 6.921 - 6.969: 98.5950% ( 2) 00:10:56.463 7.016 - 7.064: 98.6023% ( 1) 00:10:56.463 7.064 - 7.111: 98.6169% ( 2) 00:10:56.463 7.159 - 7.206: 98.6242% ( 1) 00:10:56.463 7.206 - 7.253: 98.6315% ( 1) 00:10:56.463 7.301 - 7.348: 98.6389% ( 1) 00:10:56.463 7.348 - 7.396: 98.6681% ( 4) 00:10:56.463 7.443 - 7.490: 98.6828% ( 2) 00:10:56.463 7.490 - 7.538: 98.6974% ( 2) 00:10:56.463 7.585 - 7.633: 98.7047% ( 1) 00:10:56.463 7.633 - 7.680: 98.7194% ( 2) 00:10:56.463 7.775 - 7.822: 98.7340% ( 2) 00:10:56.463 7.870 - 7.917: 98.7633% ( 4) 00:10:56.463 8.012 - 8.059: 98.7706% ( 1) 00:10:56.463 8.059 - 8.107: 98.7779% ( 1) 00:10:56.463 8.344 - 8.391: 98.7852% ( 1) 00:10:56.463 8.391 - 8.439: 98.7925% ( 1) 00:10:56.463 8.439 - 8.486: 98.8072% ( 2) 00:10:56.463 8.628 - 8.676: 98.8145% ( 1) 00:10:56.463 8.960 - 9.007: 98.8218% ( 1) 00:10:56.463 9.007 - 9.055: 98.8291% ( 1) 00:10:56.463 9.102 - 9.150: 98.8364% ( 1) 00:10:56.463 9.244 - 9.292: 98.8438% ( 1) 00:10:56.463 9.339 - 9.387: 98.8511% ( 1) 00:10:56.463 9.434 - 9.481: 98.8584% ( 1) 00:10:56.463 9.529 - 9.576: 98.8657% ( 1) 00:10:56.463 9.624 - 9.671: 98.8730% ( 1) 00:10:56.463 9.766 - 9.813: 98.8804% ( 1) 00:10:56.463 9.908 - 9.956: 98.8877% ( 1) 00:10:56.463 10.050 - 10.098: 98.8950% ( 1) 00:10:56.463 10.193 - 10.240: 98.9096% ( 2) 00:10:56.463 10.287 - 10.335: 98.9169% ( 1) 00:10:56.463 10.335 - 10.382: 98.9243% ( 1) 00:10:56.463 10.524 - 10.572: 98.9389% ( 2) 00:10:56.463 10.999 - 11.046: 98.9462% ( 1) 00:10:56.463 11.093 - 11.141: 98.9535% ( 1) 00:10:56.463 11.188 - 11.236: 98.9608% ( 1) 00:10:56.463 11.425 - 11.473: 98.9682% ( 1) 00:10:56.463 11.757 - 11.804: 98.9828% ( 2) 00:10:56.463 11.852 - 11.899: 98.9901% ( 1) 00:10:56.463 11.899 - 11.947: 98.9974% ( 1) 00:10:56.463 11.947 - 11.994: 99.0048% ( 1) 00:10:56.463 11.994 - 12.041: 99.0121% ( 1) 00:10:56.463 12.089 - 12.136: 99.0194% ( 1) 00:10:56.463 12.326 - 12.421: 99.0340% ( 2) 00:10:56.463 12.516 - 12.610: 99.0413% ( 1) 00:10:56.463 12.800 - 12.895: 99.0487% ( 1) 00:10:56.463 13.464 - 13.559: 99.0560% ( 1) 00:10:56.463 13.653 - 13.748: 99.0633% ( 1) 00:10:56.463 13.843 - 13.938: 99.0706% ( 1) 00:10:56.463 13.938 - 14.033: 99.0853% ( 2) 00:10:56.463 14.127 - 14.222: 99.0926% ( 1) 00:10:56.463 14.317 - 14.412: 99.0999% ( 1) 00:10:56.463 14.696 - 14.791: 99.1072% ( 1) 00:10:56.463 14.791 - 14.886: 99.1218% ( 2) 00:10:56.463 16.593 - 16.687: 99.1292% ( 1) 00:10:56.463 17.067 - 17.161: 99.1365% ( 1) 00:10:56.463 17.161 - 17.256: 99.1511% ( 2) 00:10:56.463 17.256 - 17.351: 
99.1658% ( 2) 00:10:56.464 17.351 - 17.446: 99.1950% ( 4) 00:10:56.464 17.446 - 17.541: 99.2023% ( 1) 00:10:56.464 17.541 - 17.636: 99.2316% ( 4) 00:10:56.464 17.636 - 17.730: 99.2682% ( 5) 00:10:56.464 17.730 - 17.825: 99.3194% ( 7) 00:10:56.464 17.825 - 17.920: 99.3999% ( 11) 00:10:56.464 17.920 - 18.015: 99.4512% ( 7) 00:10:56.464 18.015 - 18.110: 99.4877% ( 5) 00:10:56.464 18.110 - 18.204: 99.5243% ( 5) 00:10:56.464 18.204 - 18.299: 99.5682% ( 6) 00:10:56.464 18.299 - 18.394: 99.6195% ( 7) 00:10:56.464 18.394 - 18.489: 99.6561% ( 5) 00:10:56.464 18.489 - 18.584: 99.7146% ( 8) 00:10:56.464 18.584 - 18.679: 99.7439% ( 4) 00:10:56.464 18.679 - 18.773: 99.7731% ( 4) 00:10:56.464 18.773 - 18.868: 99.7951% ( 3) 00:10:56.464 18.868 - 18.963: 99.8317% ( 5) 00:10:56.464 18.963 - 19.058: 99.8463% ( 2) 00:10:56.464 19.058 - 19.153: 99.8536% ( 1) 00:10:56.464 19.153 - 19.247: 99.8610% ( 1) 00:10:56.464 19.342 - 19.437: 99.8756% ( 2) 00:10:56.464 19.532 - 19.627: 99.8829% ( 1) 00:10:56.464 20.196 - 20.290: 99.8902% ( 1) 00:10:56.464 20.480 - 20.575: 99.8975% ( 1) 00:10:56.464 20.764 - 20.859: 99.9049% ( 1) 00:10:56.464 22.471 - 22.566: 99.9122% ( 1) 00:10:56.464 28.255 - 28.444: 99.9195% ( 1) 00:10:56.464 34.513 - 34.702: 99.9268% ( 1) 00:10:56.464 3980.705 - 4004.978: 99.9707% ( 6) 00:10:56.464 4004.978 - 4029.250: 100.0000% ( 4) 00:10:56.464 00:10:56.464 Complete histogram 00:10:56.464 ================== 00:10:56.464 Range in us Cumulative Count 00:10:56.464 2.015 - 2.027: 0.2927% ( 40) 00:10:56.464 2.027 - 2.039: 3.0589% ( 378) 00:10:56.464 2.039 - 2.050: 9.0377% ( 817) 00:10:56.464 2.050 - 2.062: 22.2173% ( 1801) 00:10:56.464 2.062 - 2.074: 34.1456% ( 1630) 00:10:56.464 2.074 - 2.086: 50.9550% ( 2297) 00:10:56.464 2.086 - 2.098: 59.6414% ( 1187) 00:10:56.464 2.098 - 2.110: 62.2027% ( 350) 00:10:56.464 2.110 - 2.121: 67.8302% ( 769) 00:10:56.464 2.121 - 2.133: 72.6089% ( 653) 00:10:56.464 2.133 - 2.145: 77.0582% ( 608) 00:10:56.464 2.145 - 2.157: 86.3739% ( 1273) 00:10:56.464 2.157 - 2.169: 89.3377% ( 405) 00:10:56.464 2.169 - 2.181: 90.2452% ( 124) 00:10:56.464 2.181 - 2.193: 91.6648% ( 194) 00:10:56.464 2.193 - 2.204: 92.4259% ( 104) 00:10:56.464 2.204 - 2.216: 93.8383% ( 193) 00:10:56.464 2.216 - 2.228: 95.0677% ( 168) 00:10:56.464 2.228 - 2.240: 95.3897% ( 44) 00:10:56.464 2.240 - 2.252: 95.6751% ( 39) 00:10:56.464 2.252 - 2.264: 95.8214% ( 20) 00:10:56.464 2.264 - 2.276: 95.9312% ( 15) 00:10:56.464 2.276 - 2.287: 96.1068% ( 24) 00:10:56.464 2.287 - 2.299: 96.3117% ( 28) 00:10:56.464 2.299 - 2.311: 96.3776% ( 9) 00:10:56.464 2.311 - 2.323: 96.5020% ( 17) 00:10:56.464 2.323 - 2.335: 96.6630% ( 22) 00:10:56.464 2.335 - 2.347: 96.8825% ( 30) 00:10:56.464 2.347 - 2.359: 97.1240% ( 33) 00:10:56.464 2.359 - 2.370: 97.4899% ( 50) 00:10:56.464 2.370 - 2.382: 97.7022% ( 29) 00:10:56.464 2.382 - 2.394: 97.9363% ( 32) 00:10:56.464 2.394 - 2.406: 98.1120% ( 24) 00:10:56.464 2.406 - 2.418: 98.2071% ( 13) 00:10:56.464 2.418 - 2.430: 98.2803% ( 10) 00:10:56.464 2.430 - 2.441: 98.3388% ( 8) 00:10:56.464 2.441 - 2.453: 98.3535% ( 2) 00:10:56.464 2.453 - 2.465: 98.3754% ( 3) 00:10:56.464 2.465 - 2.477: 98.3900% ( 2) 00:10:56.464 2.477 - 2.489: 98.4120% ( 3) 00:10:56.464 2.489 - 2.501: 98.4266% ( 2) 00:10:56.464 2.501 - 2.513: 98.4340% ( 1) 00:10:56.464 2.513 - 2.524: 98.4559% ( 3) 00:10:56.464 2.524 - 2.536: 98.4779% ( 3) 00:10:56.464 2.536 - 2.548: 98.4998% ( 3) 00:10:56.464 2.548 - 2.560: 98.5145% ( 2) 00:10:56.464 2.572 - 2.584: 98.5218% ( 1) 00:10:56.464 2.619 - 2.631: 98.5291% ( 1) 00:10:56.464 2.631 - 
2.643: 98.5364% ( 1) 00:10:56.464 2.667 - 2.679: 98.5437% ( 1) 00:10:56.464 2.702 - 2.714: 98.5510% ( 1) 00:10:56.464 2.714 - 2.726: 98.5657% ( 2) 00:10:56.464 2.738 - 2.750: 98.5730% ( 1) 00:10:56.464 2.750 - 2.761: 98.5803% ( 1) 00:10:56.464 2.761 - 2.773: 98.5876% ( 1) 00:10:56.464 3.034 - 3.058: 98.5950% ( 1) 00:10:56.464 3.319 - 3.342: 98.6023% ( 1) 00:10:56.464 3.366 - 3.390: 98.6096% ( 1) 00:10:56.464 3.390 - 3.413: 98.6462% ( 5) 00:10:56.464 3.413 - 3.437: 98.6608% ( 2) 00:10:56.464 3.437 - 3.461: 98.6828% ( 3) 00:10:56.464 3.461 - 3.484: 98.6901% ( 1) 00:10:56.464 3.508 - 3.532: 98.7120% ( 3) 00:10:56.464 3.532 - 3.556: 98.7340% ( 3) 00:10:56.464 3.556 - 3.579: 98.7706% ( 5) 00:10:56.464 3.603 - 3.627: 98.7779% ( 1) 00:10:56.464 3.627 - 3.650: 98.7999% ( 3) 00:10:56.464 3.650 - 3.674: 98.8072% ( 1) 00:10:56.464 3.698 - 3.721: 98.8218% ( 2) 00:10:56.464 3.721 - 3.745: 98.8364% ( 2) 00:10:56.464 3.769 - 3.793: 98.8438% ( 1) 00:10:56.464 3.816 - 3.840: 98.8511% ( 1) 00:10:56.464 3.840 - 3.864: 98.8584% ( 1) 00:10:56.464 3.935 - 3.959: 98.8657% ( 1) 00:10:56.464 3.959 - 3.982: 98.8730% ( 1) 00:10:56.464 4.124 - 4.148: 98.8804% ( 1) 00:10:56.464 4.172 - 4.196: 98.8877% ( 1) 00:10:56.464 5.381 - 5.404: 98.8950% ( 1) 00:10:56.464 [2024-04-24 19:41:37.914377] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:56.464 5.594 - 5.618: 98.9023% ( 1) 00:10:56.464 5.902 - 5.926: 98.9096% ( 1) 00:10:56.464 6.021 - 6.044: 98.9169% ( 1) 00:10:56.464 6.044 - 6.068: 98.9243% ( 1) 00:10:56.464 6.258 - 6.305: 98.9389% ( 2) 00:10:56.464 6.305 - 6.353: 98.9535% ( 2) 00:10:56.464 6.400 - 6.447: 98.9608% ( 1) 00:10:56.464 6.969 - 7.016: 98.9682% ( 1) 00:10:56.464 7.111 - 7.159: 98.9828% ( 2) 00:10:56.464 7.253 - 7.301: 98.9901% ( 1) 00:10:56.464 7.348 - 7.396: 98.9974% ( 1) 00:10:56.464 10.761 - 10.809: 99.0048% ( 1) 00:10:56.464 15.455 - 15.550: 99.0121% ( 1) 00:10:56.464 15.550 - 15.644: 99.0194% ( 1) 00:10:56.464 15.644 - 15.739: 99.0560% ( 5) 00:10:56.464 15.739 - 15.834: 99.0926% ( 5) 00:10:56.464 15.834 - 15.929: 99.1072% ( 2) 00:10:56.464 15.929 - 16.024: 99.1365% ( 4) 00:10:56.464 16.024 - 16.119: 99.1584% ( 3) 00:10:56.464 16.119 - 16.213: 99.1804% ( 3) 00:10:56.464 16.213 - 16.308: 99.1950% ( 2) 00:10:56.464 16.308 - 16.403: 99.2316% ( 5) 00:10:56.464 16.403 - 16.498: 99.2609% ( 4) 00:10:56.464 16.498 - 16.593: 99.2828% ( 3) 00:10:56.464 16.593 - 16.687: 99.3048% ( 3) 00:10:56.464 16.687 - 16.782: 99.3341% ( 4) 00:10:56.464 16.782 - 16.877: 99.3487% ( 2) 00:10:56.464 16.877 - 16.972: 99.3780% ( 4) 00:10:56.464 16.972 - 17.067: 99.3853% ( 1) 00:10:56.464 17.067 - 17.161: 99.3926% ( 1) 00:10:56.464 17.161 - 17.256: 99.4072% ( 2) 00:10:56.464 17.256 - 17.351: 99.4219% ( 2) 00:10:56.464 17.351 - 17.446: 99.4292% ( 1) 00:10:56.464 17.446 - 17.541: 99.4365% ( 1) 00:10:56.464 17.636 - 17.730: 99.4512% ( 2) 00:10:56.464 17.920 - 18.015: 99.4585% ( 1) 00:10:56.464 18.299 - 18.394: 99.4658% ( 1) 00:10:56.464 18.584 - 18.679: 99.4731% ( 1) 00:10:56.464 35.650 - 35.840: 99.4804% ( 1) 00:10:56.464 1759.763 - 1771.899: 99.4877% ( 1) 00:10:56.464 2014.625 - 2026.761: 99.4951% ( 1) 00:10:56.464 3980.705 - 4004.978: 99.9195% ( 58) 00:10:56.464 4004.978 - 4029.250: 100.0000% ( 11) 00:10:56.464 00:10:56.464 19:41:37 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:56.464 19:41:37 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:56.464
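
The overhead run above can be reproduced outside the harness; a minimal sketch using the same arguments as the captured invocation, with paths shortened to a generic SPDK checkout:

    # Re-run the submit/complete latency measurement against the second
    # vfio-user controller; -o is the I/O size in bytes, -t the run time
    # in seconds, the remaining flags exactly as captured above.
    ./test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
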
19:41:37 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:56.464 19:41:37 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:56.464 19:41:37 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:57.032 [ 00:10:57.032 { 00:10:57.032 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:57.032 "subtype": "Discovery", 00:10:57.032 "listen_addresses": [], 00:10:57.032 "allow_any_host": true, 00:10:57.032 "hosts": [] 00:10:57.032 }, 00:10:57.032 { 00:10:57.032 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:57.032 "subtype": "NVMe", 00:10:57.032 "listen_addresses": [ 00:10:57.032 { 00:10:57.032 "transport": "VFIOUSER", 00:10:57.032 "trtype": "VFIOUSER", 00:10:57.032 "adrfam": "IPv4", 00:10:57.032 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:57.032 "trsvcid": "0" 00:10:57.032 } 00:10:57.032 ], 00:10:57.032 "allow_any_host": true, 00:10:57.032 "hosts": [], 00:10:57.032 "serial_number": "SPDK1", 00:10:57.032 "model_number": "SPDK bdev Controller", 00:10:57.032 "max_namespaces": 32, 00:10:57.032 "min_cntlid": 1, 00:10:57.032 "max_cntlid": 65519, 00:10:57.032 "namespaces": [ 00:10:57.032 { 00:10:57.032 "nsid": 1, 00:10:57.032 "bdev_name": "Malloc1", 00:10:57.032 "name": "Malloc1", 00:10:57.032 "nguid": "291B8DD17AAC4E7B93E7FB132B96C50D", 00:10:57.032 "uuid": "291b8dd1-7aac-4e7b-93e7-fb132b96c50d" 00:10:57.032 }, 00:10:57.032 { 00:10:57.032 "nsid": 2, 00:10:57.032 "bdev_name": "Malloc3", 00:10:57.032 "name": "Malloc3", 00:10:57.032 "nguid": "DA740D76EEC740F09E83ED62D117E7BA", 00:10:57.032 "uuid": "da740d76-eec7-40f0-9e83-ed62d117e7ba" 00:10:57.032 } 00:10:57.032 ] 00:10:57.032 }, 00:10:57.032 { 00:10:57.032 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:57.032 "subtype": "NVMe", 00:10:57.032 "listen_addresses": [ 00:10:57.032 { 00:10:57.032 "transport": "VFIOUSER", 00:10:57.032 "trtype": "VFIOUSER", 00:10:57.032 "adrfam": "IPv4", 00:10:57.032 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:57.032 "trsvcid": "0" 00:10:57.032 } 00:10:57.032 ], 00:10:57.032 "allow_any_host": true, 00:10:57.032 "hosts": [], 00:10:57.032 "serial_number": "SPDK2", 00:10:57.032 "model_number": "SPDK bdev Controller", 00:10:57.032 "max_namespaces": 32, 00:10:57.032 "min_cntlid": 1, 00:10:57.032 "max_cntlid": 65519, 00:10:57.032 "namespaces": [ 00:10:57.032 { 00:10:57.032 "nsid": 1, 00:10:57.032 "bdev_name": "Malloc2", 00:10:57.032 "name": "Malloc2", 00:10:57.032 "nguid": "7BE2C97212C14183B838FD1C40FF75DE", 00:10:57.032 "uuid": "7be2c972-12c1-4183-b838-fd1c40ff75de" 00:10:57.032 } 00:10:57.032 ] 00:10:57.032 } 00:10:57.032 ] 00:10:57.032 19:41:38 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:57.032 19:41:38 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1653051 00:10:57.032 19:41:38 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:57.032 19:41:38 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:57.032 19:41:38 -- common/autotest_common.sh@1251 -- # local i=0 00:10:57.032 19:41:38 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:57.032 19:41:38 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:57.032 19:41:38 -- common/autotest_common.sh@1262 -- # return 0 00:10:57.032 19:41:38 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:57.032 19:41:38 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:57.032 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.032 [2024-04-24 19:41:38.404723] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:57.032 Malloc4 00:10:57.032 19:41:38 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:57.290 [2024-04-24 19:41:38.767430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:57.290 19:41:38 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:57.548 Asynchronous Event Request test 00:10:57.548 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:57.548 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:57.548 Registering asynchronous event callbacks... 00:10:57.548 Starting namespace attribute notice tests for all controllers... 00:10:57.548 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:57.548 aer_cb - Changed Namespace 00:10:57.548 Cleaning up... 00:10:57.548 [ 00:10:57.548 { 00:10:57.548 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:57.548 "subtype": "Discovery", 00:10:57.548 "listen_addresses": [], 00:10:57.548 "allow_any_host": true, 00:10:57.548 "hosts": [] 00:10:57.548 }, 00:10:57.548 { 00:10:57.548 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:57.548 "subtype": "NVMe", 00:10:57.548 "listen_addresses": [ 00:10:57.548 { 00:10:57.548 "transport": "VFIOUSER", 00:10:57.548 "trtype": "VFIOUSER", 00:10:57.548 "adrfam": "IPv4", 00:10:57.548 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:57.548 "trsvcid": "0" 00:10:57.548 } 00:10:57.548 ], 00:10:57.548 "allow_any_host": true, 00:10:57.548 "hosts": [], 00:10:57.548 "serial_number": "SPDK1", 00:10:57.548 "model_number": "SPDK bdev Controller", 00:10:57.548 "max_namespaces": 32, 00:10:57.548 "min_cntlid": 1, 00:10:57.548 "max_cntlid": 65519, 00:10:57.548 "namespaces": [ 00:10:57.549 { 00:10:57.549 "nsid": 1, 00:10:57.549 "bdev_name": "Malloc1", 00:10:57.549 "name": "Malloc1", 00:10:57.549 "nguid": "291B8DD17AAC4E7B93E7FB132B96C50D", 00:10:57.549 "uuid": "291b8dd1-7aac-4e7b-93e7-fb132b96c50d" 00:10:57.549 }, 00:10:57.549 { 00:10:57.549 "nsid": 2, 00:10:57.549 "bdev_name": "Malloc3", 00:10:57.549 "name": "Malloc3", 00:10:57.549 "nguid": "DA740D76EEC740F09E83ED62D117E7BA", 00:10:57.549 "uuid": "da740d76-eec7-40f0-9e83-ed62d117e7ba" 00:10:57.549 } 00:10:57.549 ] 00:10:57.549 }, 00:10:57.549 { 00:10:57.549 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:57.549 "subtype": "NVMe", 00:10:57.549 "listen_addresses": [ 00:10:57.549 { 00:10:57.549 "transport": "VFIOUSER", 00:10:57.549 "trtype": "VFIOUSER", 00:10:57.549 "adrfam": "IPv4", 00:10:57.549 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:57.549 "trsvcid": "0" 00:10:57.549 } 00:10:57.549 ], 00:10:57.549 "allow_any_host": true, 00:10:57.549 "hosts": [], 00:10:57.549 "serial_number": "SPDK2", 00:10:57.549 "model_number": "SPDK bdev Controller", 00:10:57.549 "max_namespaces": 32, 00:10:57.549 "min_cntlid": 1, 
00:10:57.549 "max_cntlid": 65519, 00:10:57.549 "namespaces": [ 00:10:57.549 { 00:10:57.549 "nsid": 1, 00:10:57.549 "bdev_name": "Malloc2", 00:10:57.549 "name": "Malloc2", 00:10:57.549 "nguid": "7BE2C97212C14183B838FD1C40FF75DE", 00:10:57.549 "uuid": "7be2c972-12c1-4183-b838-fd1c40ff75de" 00:10:57.549 }, 00:10:57.549 { 00:10:57.549 "nsid": 2, 00:10:57.549 "bdev_name": "Malloc4", 00:10:57.549 "name": "Malloc4", 00:10:57.549 "nguid": "D24F862441034FA4BFC580F353A68E2C", 00:10:57.549 "uuid": "d24f8624-4103-4fa4-bfc5-80f353a68e2c" 00:10:57.549 } 00:10:57.549 ] 00:10:57.549 } 00:10:57.549 ] 00:10:57.549 19:41:39 -- target/nvmf_vfio_user.sh@44 -- # wait 1653051 00:10:57.549 19:41:39 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:57.549 19:41:39 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1647453 00:10:57.549 19:41:39 -- common/autotest_common.sh@936 -- # '[' -z 1647453 ']' 00:10:57.549 19:41:39 -- common/autotest_common.sh@940 -- # kill -0 1647453 00:10:57.549 19:41:39 -- common/autotest_common.sh@941 -- # uname 00:10:57.549 19:41:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:57.549 19:41:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1647453 00:10:57.549 19:41:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:57.549 19:41:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:57.809 19:41:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1647453' 00:10:57.809 killing process with pid 1647453 00:10:57.809 19:41:39 -- common/autotest_common.sh@955 -- # kill 1647453 00:10:57.809 [2024-04-24 19:41:39.063045] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:10:57.809 19:41:39 -- common/autotest_common.sh@960 -- # wait 1647453 00:10:58.067 19:41:39 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:58.067 19:41:39 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:58.067 19:41:39 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:58.067 19:41:39 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:58.067 19:41:39 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:58.067 19:41:39 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1653195 00:10:58.067 19:41:39 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:58.067 19:41:39 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1653195' 00:10:58.067 Process pid: 1653195 00:10:58.067 19:41:39 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:58.067 19:41:39 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1653195 00:10:58.067 19:41:39 -- common/autotest_common.sh@817 -- # '[' -z 1653195 ']' 00:10:58.067 19:41:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.067 19:41:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:58.067 19:41:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:58.067 19:41:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:58.067 19:41:39 -- common/autotest_common.sh@10 -- # set +x 00:10:58.067 [2024-04-24 19:41:39.502681] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:58.067 [2024-04-24 19:41:39.503773] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:10:58.067 [2024-04-24 19:41:39.503837] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.067 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.067 [2024-04-24 19:41:39.567691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.327 [2024-04-24 19:41:39.681833] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.327 [2024-04-24 19:41:39.681898] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:58.327 [2024-04-24 19:41:39.681921] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.327 [2024-04-24 19:41:39.681934] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.327 [2024-04-24 19:41:39.681946] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.327 [2024-04-24 19:41:39.682034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.327 [2024-04-24 19:41:39.682092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.327 [2024-04-24 19:41:39.682210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.327 [2024-04-24 19:41:39.682213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.327 [2024-04-24 19:41:39.790357] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:10:58.327 [2024-04-24 19:41:39.790587] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:10:58.327 [2024-04-24 19:41:39.790865] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:10:58.327 [2024-04-24 19:41:39.791561] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:58.327 [2024-04-24 19:41:39.791702] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
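
From here the whole target and subsystem setup is repeated with the application in interrupt mode; that is what the --interrupt-mode flag above and the '-M -I' arguments passed through to the transport (created just below) are for. A sketch of the two commands involved, arguments as captured:

    # Start the target on cores 0-3 with interrupt mode enabled.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    # Create the vfio-user transport with the extra pass-through flags.
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
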
00:10:59.263 19:41:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:59.263 19:41:40 -- common/autotest_common.sh@850 -- # return 0 00:10:59.263 19:41:40 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:00.200 19:41:41 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:11:00.458 19:41:41 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:00.458 19:41:41 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:00.458 19:41:41 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:00.458 19:41:41 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:00.458 19:41:41 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:00.716 Malloc1 00:11:00.716 19:41:42 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:00.975 19:41:42 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:01.233 19:41:42 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:01.491 19:41:42 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:01.491 19:41:42 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:01.491 19:41:42 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:01.749 Malloc2 00:11:01.749 19:41:43 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:02.008 19:41:43 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:02.267 19:41:43 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:02.525 19:41:43 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:11:02.525 19:41:43 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1653195 00:11:02.525 19:41:43 -- common/autotest_common.sh@936 -- # '[' -z 1653195 ']' 00:11:02.525 19:41:43 -- common/autotest_common.sh@940 -- # kill -0 1653195 00:11:02.525 19:41:43 -- common/autotest_common.sh@941 -- # uname 00:11:02.525 19:41:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:02.525 19:41:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1653195 00:11:02.525 19:41:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:02.525 19:41:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:02.525 19:41:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1653195' 00:11:02.525 killing process with pid 1653195 00:11:02.525 19:41:43 -- common/autotest_common.sh@955 -- # kill 1653195 00:11:02.525 19:41:43 -- common/autotest_common.sh@960 -- # wait 1653195 00:11:02.783 [2024-04-24 19:41:44.157856] 
thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:11:03.041 19:41:44 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:03.041 19:41:44 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:03.041 00:11:03.041 real 0m53.255s 00:11:03.041 user 3m29.627s 00:11:03.041 sys 0m4.618s 00:11:03.041 19:41:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:03.041 19:41:44 -- common/autotest_common.sh@10 -- # set +x 00:11:03.041 ************************************ 00:11:03.041 END TEST nvmf_vfio_user 00:11:03.041 ************************************ 00:11:03.041 19:41:44 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:03.041 19:41:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:03.041 19:41:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:03.041 19:41:44 -- common/autotest_common.sh@10 -- # set +x 00:11:03.041 ************************************ 00:11:03.041 START TEST nvmf_vfio_user_nvme_compliance 00:11:03.041 ************************************ 00:11:03.041 19:41:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:03.041 * Looking for test storage... 00:11:03.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:11:03.041 19:41:44 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.041 19:41:44 -- nvmf/common.sh@7 -- # uname -s 00:11:03.041 19:41:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.041 19:41:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.041 19:41:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.041 19:41:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.041 19:41:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.041 19:41:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.041 19:41:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.041 19:41:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.041 19:41:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.041 19:41:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.041 19:41:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:03.041 19:41:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:03.041 19:41:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.041 19:41:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.041 19:41:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.041 19:41:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.041 19:41:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.041 19:41:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.041 19:41:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.041 19:41:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.041 19:41:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.041 19:41:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.041 19:41:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.041 19:41:44 -- paths/export.sh@5 -- # export PATH 00:11:03.041 19:41:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.041 19:41:44 -- nvmf/common.sh@47 -- # : 0 00:11:03.041 19:41:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:03.041 19:41:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:03.041 19:41:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.041 19:41:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.041 19:41:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.041 19:41:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:03.041 19:41:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:03.041 19:41:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:03.041 19:41:44 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.041 19:41:44 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.041 19:41:44 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:11:03.041 19:41:44 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:11:03.041 19:41:44 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:11:03.041 19:41:44 -- compliance/compliance.sh@20 -- # nvmfpid=1653817 00:11:03.041 19:41:44 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x7 00:11:03.041 19:41:44 -- compliance/compliance.sh@21 -- # echo 'Process pid: 1653817' 00:11:03.041 Process pid: 1653817 00:11:03.041 19:41:44 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:03.041 19:41:44 -- compliance/compliance.sh@24 -- # waitforlisten 1653817 00:11:03.041 19:41:44 -- common/autotest_common.sh@817 -- # '[' -z 1653817 ']' 00:11:03.041 19:41:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.041 19:41:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:03.041 19:41:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.041 19:41:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:03.041 19:41:44 -- common/autotest_common.sh@10 -- # set +x 00:11:03.041 [2024-04-24 19:41:44.540591] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:11:03.041 [2024-04-24 19:41:44.540677] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.301 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.301 [2024-04-24 19:41:44.608003] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:03.301 [2024-04-24 19:41:44.727523] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.301 [2024-04-24 19:41:44.727593] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.301 [2024-04-24 19:41:44.727609] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.301 [2024-04-24 19:41:44.727636] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.301 [2024-04-24 19:41:44.727650] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
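
Once the compliance target is up, it is assembled with a short RPC sequence, visible as the rpc_cmd calls that follow (rpc_cmd being the harness wrapper around scripts/rpc.py). Collected in one place as a sketch:

    # Build a minimal one-namespace vfio-user target for the compliance suite.
    rpc.py nvmf_create_transport -t VFIOUSER
    rpc.py bdev_malloc_create 64 512 -b malloc0
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
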
00:11:03.301 [2024-04-24 19:41:44.728058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.301 [2024-04-24 19:41:44.728112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.301 [2024-04-24 19:41:44.728130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.236 19:41:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:04.236 19:41:45 -- common/autotest_common.sh@850 -- # return 0 00:11:04.236 19:41:45 -- compliance/compliance.sh@26 -- # sleep 1 00:11:05.174 19:41:46 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:05.174 19:41:46 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:05.174 19:41:46 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:05.174 19:41:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.174 19:41:46 -- common/autotest_common.sh@10 -- # set +x 00:11:05.174 19:41:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.174 19:41:46 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:05.174 19:41:46 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:05.174 19:41:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.174 19:41:46 -- common/autotest_common.sh@10 -- # set +x 00:11:05.174 malloc0 00:11:05.174 19:41:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.174 19:41:46 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:05.174 19:41:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.174 19:41:46 -- common/autotest_common.sh@10 -- # set +x 00:11:05.174 19:41:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.174 19:41:46 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:05.174 19:41:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.174 19:41:46 -- common/autotest_common.sh@10 -- # set +x 00:11:05.174 19:41:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.174 19:41:46 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:05.174 19:41:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.174 19:41:46 -- common/autotest_common.sh@10 -- # set +x 00:11:05.174 19:41:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.174 19:41:46 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:05.174 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.174 00:11:05.174 00:11:05.174 CUnit - A unit testing framework for C - Version 2.1-3 00:11:05.174 http://cunit.sourceforge.net/ 00:11:05.174 00:11:05.174 00:11:05.174 Suite: nvme_compliance 00:11:05.432 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-24 19:41:46.712146] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.432 [2024-04-24 19:41:46.713595] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:05.432 [2024-04-24 19:41:46.713651] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:05.432 [2024-04-24 19:41:46.713665] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:05.432 
[2024-04-24 19:41:46.715165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.432 passed 00:11:05.432 Test: admin_identify_ctrlr_verify_fused ...[2024-04-24 19:41:46.799767] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.432 [2024-04-24 19:41:46.802785] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.432 passed 00:11:05.432 Test: admin_identify_ns ...[2024-04-24 19:41:46.890179] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.692 [2024-04-24 19:41:46.950648] ctrlr.c:2668:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:05.692 [2024-04-24 19:41:46.958648] ctrlr.c:2668:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:05.692 [2024-04-24 19:41:46.979774] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.692 passed 00:11:05.692 Test: admin_get_features_mandatory_features ...[2024-04-24 19:41:47.063369] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.692 [2024-04-24 19:41:47.066391] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.692 passed 00:11:05.692 Test: admin_get_features_optional_features ...[2024-04-24 19:41:47.147901] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.692 [2024-04-24 19:41:47.150930] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.692 passed 00:11:05.953 Test: admin_set_features_number_of_queues ...[2024-04-24 19:41:47.235121] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.953 [2024-04-24 19:41:47.339732] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.953 passed 00:11:05.953 Test: admin_get_log_page_mandatory_logs ...[2024-04-24 19:41:47.421890] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:05.953 [2024-04-24 19:41:47.424914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:05.953 passed 00:11:06.243 Test: admin_get_log_page_with_lpo ...[2024-04-24 19:41:47.509495] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:06.243 [2024-04-24 19:41:47.578641] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:11:06.243 [2024-04-24 19:41:47.591746] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:06.243 passed 00:11:06.243 Test: fabric_property_get ...[2024-04-24 19:41:47.675364] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:06.243 [2024-04-24 19:41:47.676648] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:11:06.243 [2024-04-24 19:41:47.678391] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:06.243 passed 00:11:06.502 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-24 19:41:47.760308] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:06.502 [2024-04-24 19:41:47.761647] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:06.502 [2024-04-24 19:41:47.763335] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 
00:11:06.502 passed 00:11:06.502 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-24 19:41:47.849134] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:06.502 [2024-04-24 19:41:47.932639] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:06.502 [2024-04-24 19:41:47.947651] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:06.502 [2024-04-24 19:41:47.952749] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:06.502 passed 00:11:06.762 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-24 19:41:48.036459] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:06.762 [2024-04-24 19:41:48.037751] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:06.762 [2024-04-24 19:41:48.039481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:06.762 passed 00:11:06.762 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-24 19:41:48.119571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:06.762 [2024-04-24 19:41:48.196653] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:06.762 [2024-04-24 19:41:48.220636] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:06.762 [2024-04-24 19:41:48.225739] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:06.762 passed 00:11:07.022 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-24 19:41:48.309324] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:07.022 [2024-04-24 19:41:48.310580] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:07.022 [2024-04-24 19:41:48.310641] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:07.022 [2024-04-24 19:41:48.312341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:07.022 passed 00:11:07.022 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-24 19:41:48.394474] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:07.022 [2024-04-24 19:41:48.490641] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:11:07.022 [2024-04-24 19:41:48.498642] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:07.022 [2024-04-24 19:41:48.506639] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:07.022 [2024-04-24 19:41:48.514640] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:07.282 [2024-04-24 19:41:48.546770] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:07.282 passed 00:11:07.282 Test: admin_create_io_sq_verify_pc ...[2024-04-24 19:41:48.626375] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:07.282 [2024-04-24 19:41:48.642656] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:07.282 [2024-04-24 19:41:48.660672] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:07.282 passed 00:11:07.282 Test: admin_create_io_qp_max_qps ...[2024-04-24 19:41:48.745247] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:08.662 [2024-04-24 19:41:49.861644] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:08.922 [2024-04-24 19:41:50.236263] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:08.922 passed 00:11:08.922 Test: admin_create_io_sq_shared_cq ...[2024-04-24 19:41:50.322813] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:09.181 [2024-04-24 19:41:50.455657] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:09.181 [2024-04-24 19:41:50.492758] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:09.181 passed 00:11:09.181 00:11:09.181 Run Summary: Type Total Ran Passed Failed Inactive 00:11:09.181 suites 1 1 n/a 0 0 00:11:09.181 tests 18 18 18 0 0 00:11:09.181 asserts 360 360 360 0 n/a 00:11:09.181 00:11:09.181 Elapsed time = 1.568 seconds 00:11:09.181 19:41:50 -- compliance/compliance.sh@42 -- # killprocess 1653817 00:11:09.181 19:41:50 -- common/autotest_common.sh@936 -- # '[' -z 1653817 ']' 00:11:09.181 19:41:50 -- common/autotest_common.sh@940 -- # kill -0 1653817 00:11:09.181 19:41:50 -- common/autotest_common.sh@941 -- # uname 00:11:09.181 19:41:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:09.181 19:41:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1653817 00:11:09.181 19:41:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:09.181 19:41:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:09.181 19:41:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1653817' 00:11:09.181 killing process with pid 1653817 00:11:09.181 19:41:50 -- common/autotest_common.sh@955 -- # kill 1653817 00:11:09.181 19:41:50 -- common/autotest_common.sh@960 -- # wait 1653817 00:11:09.439 19:41:50 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:11:09.439 19:41:50 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:09.439 00:11:09.439 real 0m6.447s 00:11:09.439 user 0m18.237s 00:11:09.439 sys 0m0.620s 00:11:09.439 19:41:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:09.439 19:41:50 -- common/autotest_common.sh@10 -- # set +x 00:11:09.439 ************************************ 00:11:09.439 END TEST nvmf_vfio_user_nvme_compliance 00:11:09.439 ************************************ 00:11:09.439 19:41:50 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:09.439 19:41:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:09.439 19:41:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:09.439 19:41:50 -- common/autotest_common.sh@10 -- # set +x 00:11:09.698 ************************************ 00:11:09.698 START TEST nvmf_vfio_user_fuzz 00:11:09.698 ************************************ 00:11:09.698 19:41:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:09.698 * Looking for test storage... 
00:11:09.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.698 19:41:51 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.698 19:41:51 -- nvmf/common.sh@7 -- # uname -s 00:11:09.698 19:41:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.698 19:41:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.698 19:41:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.698 19:41:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.698 19:41:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.698 19:41:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.698 19:41:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.698 19:41:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.698 19:41:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.698 19:41:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.698 19:41:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.698 19:41:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.698 19:41:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.698 19:41:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.698 19:41:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.698 19:41:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.698 19:41:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.698 19:41:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.698 19:41:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.698 19:41:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.698 19:41:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.698 19:41:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.699 19:41:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.699 19:41:51 -- paths/export.sh@5 -- # export PATH 00:11:09.699 19:41:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.699 19:41:51 -- nvmf/common.sh@47 -- # : 0 00:11:09.699 19:41:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.699 19:41:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.699 19:41:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.699 19:41:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.699 19:41:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.699 19:41:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.699 19:41:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.699 19:41:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.699 19:41:51 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:09.699 19:41:51 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:09.699 19:41:51 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:09.699 19:41:51 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:09.699 19:41:51 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:09.699 19:41:51 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:09.699 19:41:51 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:09.699 19:41:51 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1654679 00:11:09.699 19:41:51 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:09.699 19:41:51 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1654679' 00:11:09.699 Process pid: 1654679 00:11:09.699 19:41:51 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:09.699 19:41:51 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1654679 00:11:09.699 19:41:51 -- common/autotest_common.sh@817 -- # '[' -z 1654679 ']' 00:11:09.699 19:41:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.699 19:41:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:09.699 19:41:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
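
The fuzz stage that follows builds the same one-namespace vfio-user target as the compliance suite and then runs nvme_fuzz against it for 30 seconds. A sketch of the fuzzer invocation, arguments as captured (-t is the run time; -S appears to seed the generator for reproducibility; -N and -a are left as captured):

    # Fuzz the vfio-user endpoint for 30 seconds with a fixed seed.
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
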
00:11:09.699 19:41:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:09.699 19:41:51 -- common/autotest_common.sh@10 -- # set +x 00:11:10.636 19:41:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:10.636 19:41:52 -- common/autotest_common.sh@850 -- # return 0 00:11:10.636 19:41:52 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:11.575 19:41:53 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:11.575 19:41:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.575 19:41:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.575 19:41:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.575 19:41:53 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:11.835 19:41:53 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:11.835 19:41:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.835 19:41:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.835 malloc0 00:11:11.835 19:41:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.835 19:41:53 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:11.835 19:41:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.835 19:41:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.835 19:41:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.835 19:41:53 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:11.835 19:41:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.835 19:41:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.835 19:41:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.835 19:41:53 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:11.835 19:41:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:11.835 19:41:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.835 19:41:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:11.835 19:41:53 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:11.835 19:41:53 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:43.900 Fuzzing completed. 
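The bring-up that vfio_user_fuzz.sh drives above boils down to one nvmf_tgt process plus a short RPC sequence. A minimal standalone sketch of the same flow, assuming an SPDK build tree at $SPDK and using scripts/rpc.py in place of the suite's rpc_cmd wrapper (the spdk_get_version readiness poll is likewise an assumption standing in for waitforlisten):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Single-core target (-m 0x1) with all tracepoint groups enabled (-e 0xFFFF)
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# Wait for the RPC socket to come up (stand-in for waitforlisten)
until $SPDK/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
$SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
# 64 MiB malloc bdev with 512-byte blocks backing the fuzz namespace
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0
# 30-second fuzz pass with a fixed seed (-S) so failures reproduce
$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a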
Shutting down the fuzz application 00:11:43.900 00:11:43.900 Dumping successful admin opcodes: 00:11:43.900 8, 9, 10, 24, 00:11:43.900 Dumping successful io opcodes: 00:11:43.900 0, 00:11:43.900 NS: 0x200003a1ef00 I/O qp, Total commands completed: 586457, total successful commands: 2260, random_seed: 3527323264 00:11:43.900 NS: 0x200003a1ef00 admin qp, Total commands completed: 74898, total successful commands: 585, random_seed: 635578368 00:11:43.900 19:42:24 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:43.900 19:42:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:43.900 19:42:24 -- common/autotest_common.sh@10 -- # set +x 00:11:43.900 19:42:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:43.900 19:42:24 -- target/vfio_user_fuzz.sh@46 -- # killprocess 1654679 00:11:43.900 19:42:24 -- common/autotest_common.sh@936 -- # '[' -z 1654679 ']' 00:11:43.900 19:42:24 -- common/autotest_common.sh@940 -- # kill -0 1654679 00:11:43.900 19:42:24 -- common/autotest_common.sh@941 -- # uname 00:11:43.900 19:42:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:43.900 19:42:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1654679 00:11:43.900 19:42:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:43.900 19:42:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:43.900 19:42:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1654679' 00:11:43.900 killing process with pid 1654679 00:11:43.900 19:42:24 -- common/autotest_common.sh@955 -- # kill 1654679 00:11:43.900 19:42:24 -- common/autotest_common.sh@960 -- # wait 1654679 00:11:43.900 19:42:24 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:43.900 19:42:25 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:43.900 00:11:43.900 real 0m34.038s 00:11:43.900 user 0m34.283s 00:11:43.900 sys 0m28.763s 00:11:43.900 19:42:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:43.900 19:42:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.900 ************************************ 00:11:43.900 END TEST nvmf_vfio_user_fuzz 00:11:43.900 ************************************ 00:11:43.900 19:42:25 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:43.900 19:42:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:43.900 19:42:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:43.900 19:42:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.900 ************************************ 00:11:43.900 START TEST nvmf_host_management 00:11:43.900 ************************************ 00:11:43.900 19:42:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:43.900 * Looking for test storage... 
00:11:43.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.900 19:42:25 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.900 19:42:25 -- nvmf/common.sh@7 -- # uname -s 00:11:43.900 19:42:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.900 19:42:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.900 19:42:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.900 19:42:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.900 19:42:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.900 19:42:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.900 19:42:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.900 19:42:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.900 19:42:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.900 19:42:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.900 19:42:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:43.900 19:42:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:43.900 19:42:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.900 19:42:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.900 19:42:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.900 19:42:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.900 19:42:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.900 19:42:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.900 19:42:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.900 19:42:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.900 19:42:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.900 19:42:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.900 19:42:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.900 19:42:25 -- paths/export.sh@5 -- # export PATH 00:11:43.901 19:42:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.901 19:42:25 -- nvmf/common.sh@47 -- # : 0 00:11:43.901 19:42:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:43.901 19:42:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:43.901 19:42:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.901 19:42:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.901 19:42:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.901 19:42:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:43.901 19:42:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:43.901 19:42:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:43.901 19:42:25 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:43.901 19:42:25 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:43.901 19:42:25 -- target/host_management.sh@105 -- # nvmftestinit 00:11:43.901 19:42:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:43.901 19:42:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.901 19:42:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:43.901 19:42:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:43.901 19:42:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:43.901 19:42:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.901 19:42:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.901 19:42:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.901 19:42:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:43.901 19:42:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:43.901 19:42:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:43.901 19:42:25 -- common/autotest_common.sh@10 -- # set +x 00:11:45.839 19:42:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:45.839 19:42:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:45.839 19:42:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:45.839 19:42:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:45.839 19:42:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:45.839 19:42:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:45.839 19:42:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:45.839 19:42:27 -- nvmf/common.sh@295 -- # net_devs=() 00:11:45.839 19:42:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:45.839 
19:42:27 -- nvmf/common.sh@296 -- # e810=() 00:11:45.839 19:42:27 -- nvmf/common.sh@296 -- # local -ga e810 00:11:45.839 19:42:27 -- nvmf/common.sh@297 -- # x722=() 00:11:45.839 19:42:27 -- nvmf/common.sh@297 -- # local -ga x722 00:11:45.839 19:42:27 -- nvmf/common.sh@298 -- # mlx=() 00:11:45.839 19:42:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:45.839 19:42:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:45.839 19:42:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:45.839 19:42:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:45.839 19:42:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:45.839 19:42:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:45.839 19:42:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:45.839 19:42:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:45.839 19:42:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:45.839 19:42:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:45.839 19:42:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:45.839 19:42:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:45.839 19:42:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:45.839 19:42:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:45.839 19:42:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:45.839 19:42:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:45.839 19:42:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:45.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:45.839 19:42:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:45.839 19:42:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:45.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:45.839 19:42:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:45.839 19:42:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:45.839 19:42:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:45.840 19:42:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.840 19:42:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:45.840 19:42:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.840 19:42:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:11:45.840 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:45.840 19:42:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.840 19:42:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:45.840 19:42:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.840 19:42:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:45.840 19:42:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.840 19:42:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:45.840 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:45.840 19:42:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.840 19:42:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:45.840 19:42:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:45.840 19:42:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:45.840 19:42:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:45.840 19:42:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:45.840 19:42:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.840 19:42:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.840 19:42:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:45.840 19:42:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:45.840 19:42:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:45.840 19:42:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:45.840 19:42:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:45.840 19:42:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:45.840 19:42:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.840 19:42:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:45.840 19:42:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:45.840 19:42:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:45.840 19:42:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:45.840 19:42:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:45.840 19:42:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:45.840 19:42:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:45.840 19:42:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:45.840 19:42:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:45.840 19:42:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:45.840 19:42:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:45.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:11:45.840 00:11:45.840 --- 10.0.0.2 ping statistics --- 00:11:45.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.840 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:11:45.840 19:42:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:45.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:45.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:11:45.840 00:11:45.840 --- 10.0.0.1 ping statistics --- 00:11:45.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.840 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:11:45.840 19:42:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.840 19:42:27 -- nvmf/common.sh@411 -- # return 0 00:11:45.840 19:42:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:45.840 19:42:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.840 19:42:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:45.840 19:42:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:45.840 19:42:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.840 19:42:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:45.840 19:42:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:45.840 19:42:27 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:11:45.840 19:42:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:45.840 19:42:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:45.840 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:11:45.840 ************************************ 00:11:45.840 START TEST nvmf_host_management 00:11:45.840 ************************************ 00:11:45.840 19:42:27 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:11:45.840 19:42:27 -- target/host_management.sh@69 -- # starttarget 00:11:45.840 19:42:27 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:45.840 19:42:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:45.840 19:42:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:45.840 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:11:45.840 19:42:27 -- nvmf/common.sh@470 -- # nvmfpid=1661029 00:11:45.840 19:42:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:45.840 19:42:27 -- nvmf/common.sh@471 -- # waitforlisten 1661029 00:11:45.840 19:42:27 -- common/autotest_common.sh@817 -- # '[' -z 1661029 ']' 00:11:45.840 19:42:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.840 19:42:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:45.840 19:42:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.840 19:42:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:45.840 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:11:46.100 [2024-04-24 19:42:27.357123] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:11:46.100 [2024-04-24 19:42:27.357213] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.100 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.100 [2024-04-24 19:42:27.426741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.100 [2024-04-24 19:42:27.549800] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:46.100 [2024-04-24 19:42:27.549867] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.100 [2024-04-24 19:42:27.549883] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.100 [2024-04-24 19:42:27.549897] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.100 [2024-04-24 19:42:27.549909] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.100 [2024-04-24 19:42:27.549996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.100 [2024-04-24 19:42:27.550113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.100 [2024-04-24 19:42:27.550183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.100 [2024-04-24 19:42:27.550181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:46.359 19:42:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:46.359 19:42:27 -- common/autotest_common.sh@850 -- # return 0 00:11:46.359 19:42:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:46.359 19:42:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:46.359 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:11:46.359 19:42:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.359 19:42:27 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:46.359 19:42:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.359 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:11:46.359 [2024-04-24 19:42:27.706177] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.359 19:42:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.359 19:42:27 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:46.359 19:42:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:46.359 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:11:46.359 19:42:27 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:46.359 19:42:27 -- target/host_management.sh@23 -- # cat 00:11:46.359 19:42:27 -- target/host_management.sh@30 -- # rpc_cmd 00:11:46.359 19:42:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.359 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:11:46.359 Malloc0 00:11:46.359 [2024-04-24 19:42:27.765153] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.359 19:42:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.359 19:42:27 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:46.359 19:42:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:46.359 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:11:46.359 19:42:27 -- target/host_management.sh@73 -- # perfpid=1661074 00:11:46.359 19:42:27 -- target/host_management.sh@74 -- # waitforlisten 1661074 /var/tmp/bdevperf.sock 00:11:46.359 19:42:27 -- common/autotest_common.sh@817 -- # '[' -z 1661074 ']' 00:11:46.359 19:42:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:46.359 19:42:27 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:46.359 19:42:27 -- target/host_management.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:46.359 19:42:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:46.359 19:42:27 -- nvmf/common.sh@521 -- # config=() 00:11:46.359 19:42:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:46.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:46.359 19:42:27 -- nvmf/common.sh@521 -- # local subsystem config 00:11:46.359 19:42:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:46.359 19:42:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:46.359 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:11:46.359 19:42:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:46.359 { 00:11:46.359 "params": { 00:11:46.359 "name": "Nvme$subsystem", 00:11:46.359 "trtype": "$TEST_TRANSPORT", 00:11:46.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:46.359 "adrfam": "ipv4", 00:11:46.359 "trsvcid": "$NVMF_PORT", 00:11:46.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:46.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:46.359 "hdgst": ${hdgst:-false}, 00:11:46.359 "ddgst": ${ddgst:-false} 00:11:46.359 }, 00:11:46.359 "method": "bdev_nvme_attach_controller" 00:11:46.359 } 00:11:46.359 EOF 00:11:46.359 )") 00:11:46.359 19:42:27 -- nvmf/common.sh@543 -- # cat 00:11:46.359 19:42:27 -- nvmf/common.sh@545 -- # jq . 00:11:46.359 19:42:27 -- nvmf/common.sh@546 -- # IFS=, 00:11:46.359 19:42:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:46.359 "params": { 00:11:46.359 "name": "Nvme0", 00:11:46.359 "trtype": "tcp", 00:11:46.359 "traddr": "10.0.0.2", 00:11:46.359 "adrfam": "ipv4", 00:11:46.359 "trsvcid": "4420", 00:11:46.359 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:46.359 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:46.359 "hdgst": false, 00:11:46.359 "ddgst": false 00:11:46.359 }, 00:11:46.359 "method": "bdev_nvme_attach_controller" 00:11:46.359 }' 00:11:46.359 [2024-04-24 19:42:27.835127] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:11:46.359 [2024-04-24 19:42:27.835207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661074 ] 00:11:46.359 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.617 [2024-04-24 19:42:27.897083] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.617 [2024-04-24 19:42:28.007045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.876 Running I/O for 10 seconds... 
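The --json /dev/fd/63 argument above hands bdevperf the bdev_nvme_attach_controller stanza that gen_nvmf_target_json prints. Written out with a temporary file instead of a process-substitution fd, and assuming the usual outer "subsystems"/"bdev" wrapper around the fragment shown in the log (the wrapper itself is not printed above), the invocation is roughly:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# 64-deep queue (-q) of 64 KiB (-o) verify I/O (-w) for 10 seconds (-t)
$SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
    -q 64 -o 65536 -w verify -t 10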
00:11:46.876 19:42:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:46.876 19:42:28 -- common/autotest_common.sh@850 -- # return 0 00:11:46.876 19:42:28 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:46.876 19:42:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.876 19:42:28 -- common/autotest_common.sh@10 -- # set +x 00:11:46.876 19:42:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.876 19:42:28 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:46.876 19:42:28 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:46.876 19:42:28 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:46.876 19:42:28 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:46.876 19:42:28 -- target/host_management.sh@52 -- # local ret=1 00:11:46.876 19:42:28 -- target/host_management.sh@53 -- # local i 00:11:46.876 19:42:28 -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:46.876 19:42:28 -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:46.876 19:42:28 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:46.876 19:42:28 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:46.876 19:42:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.876 19:42:28 -- common/autotest_common.sh@10 -- # set +x 00:11:46.876 19:42:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.876 19:42:28 -- target/host_management.sh@55 -- # read_io_count=8 00:11:46.876 19:42:28 -- target/host_management.sh@58 -- # '[' 8 -ge 100 ']' 00:11:46.876 19:42:28 -- target/host_management.sh@62 -- # sleep 0.25 00:11:47.136 19:42:28 -- target/host_management.sh@54 -- # (( i-- )) 00:11:47.136 19:42:28 -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:47.136 19:42:28 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:47.136 19:42:28 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:47.136 19:42:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.136 19:42:28 -- common/autotest_common.sh@10 -- # set +x 00:11:47.136 19:42:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.136 19:42:28 -- target/host_management.sh@55 -- # read_io_count=387 00:11:47.136 19:42:28 -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:11:47.136 19:42:28 -- target/host_management.sh@59 -- # ret=0 00:11:47.136 19:42:28 -- target/host_management.sh@60 -- # break 00:11:47.136 19:42:28 -- target/host_management.sh@64 -- # return 0 00:11:47.136 19:42:28 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:47.136 19:42:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.136 19:42:28 -- common/autotest_common.sh@10 -- # set +x 00:11:47.136 [2024-04-24 19:42:28.576023] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576080] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576097] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 
[2024-04-24 19:42:28.576109] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576121] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576133] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576146] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576158] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576180] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576192] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576204] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576216] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576228] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576239] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576251] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576262] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576274] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576309] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576320] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576332] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576344] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576356] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576367] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576379] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576391] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576402] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576425] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576437] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576448] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.136 [2024-04-24 19:42:28.576460] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576472] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576484] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576511] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576523] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576534] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576546] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576557] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576569] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576580] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576592] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576603] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576615] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576635] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576649] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576661] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576673] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576690] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576702] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576713] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576725] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576736] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3ed0 is same with the state(5) to be set 00:11:47.137 [2024-04-24 19:42:28.576960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:11:47.137 [2024-04-24 19:42:28.577187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 
[2024-04-24 19:42:28.577471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 19:42:28.577735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137 [2024-04-24 
19:42:28.577763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:47.137 [2024-04-24 19:42:28.577776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.137
[... the same WRITE command / "ABORTED - SQ DELETION (00/08)" completion pair repeats for cid:27 through cid:63, lba 60800 through 65408 in 128-block steps ...]
00:11:47.138 [2024-04-24 19:42:28.578943] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16905d0 was disconnected and freed. reset controller. 
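A word on the block of completions above: status (00/08) decodes as generic status, Command Aborted due to SQ Deletion, meaning every WRITE still queued on I/O qpair 1 was failed when the target deleted the submission queue, after which bdev_nvme freed the disconnected qpair and scheduled the controller reset seen below. host_management.sh presumably provokes this by revoking the allowed host NQN mid-I/O earlier in the script; a minimal sketch of that target-side toggle, reusing the NQNs and rpc.py path from this run, would be:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # revoke the host while bdevperf is still writing: queued I/O completes ABORTED - SQ DELETION
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # re-admit the host so the controller reset that follows can reconnect
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0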
00:11:47.138 [2024-04-24 19:42:28.580081] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:47.138 19:42:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.138 19:42:28 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:47.138 19:42:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.138 19:42:28 -- common/autotest_common.sh@10 -- # set +x 00:11:47.138 task offset: 57344 on job bdev=Nvme0n1 fails 00:11:47.138 00:11:47.138 Latency(us) 00:11:47.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.138 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:47.138 Job: Nvme0n1 ended in about 0.39 seconds with error 00:11:47.138 Verification LBA range: start 0x0 length 0x400 00:11:47.138 Nvme0n1 : 0.39 1149.84 71.86 164.26 0.00 47362.49 4296.25 43496.49 00:11:47.138 =================================================================================================================== 00:11:47.139 Total : 1149.84 71.86 164.26 0.00 47362.49 4296.25 43496.49 00:11:47.139 [2024-04-24 19:42:28.581971] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:47.139 [2024-04-24 19:42:28.582000] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1280170 (9): Bad file descriptor 00:11:47.139 19:42:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.139 19:42:28 -- target/host_management.sh@87 -- # sleep 1 00:11:47.139 [2024-04-24 19:42:28.591831] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:48.514 19:42:29 -- target/host_management.sh@91 -- # kill -9 1661074 00:11:48.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1661074) - No such process 00:11:48.514 19:42:29 -- target/host_management.sh@91 -- # true 00:11:48.514 19:42:29 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:48.514 19:42:29 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:48.514 19:42:29 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:48.514 19:42:29 -- nvmf/common.sh@521 -- # config=() 00:11:48.514 19:42:29 -- nvmf/common.sh@521 -- # local subsystem config 00:11:48.514 19:42:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:48.514 19:42:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:48.514 { 00:11:48.514 "params": { 00:11:48.514 "name": "Nvme$subsystem", 00:11:48.514 "trtype": "$TEST_TRANSPORT", 00:11:48.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.514 "adrfam": "ipv4", 00:11:48.514 "trsvcid": "$NVMF_PORT", 00:11:48.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.514 "hdgst": ${hdgst:-false}, 00:11:48.514 "ddgst": ${ddgst:-false} 00:11:48.514 }, 00:11:48.514 "method": "bdev_nvme_attach_controller" 00:11:48.514 } 00:11:48.514 EOF 00:11:48.514 )") 00:11:48.514 19:42:29 -- nvmf/common.sh@543 -- # cat 00:11:48.514 19:42:29 -- nvmf/common.sh@545 -- # jq . 
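The gen_nvmf_target_json trace around this point builds one bdev_nvme_attach_controller entry per subsystem and joins them; the expanded result is printed just below. A hypothetical standalone config with the same parameters would let the bdevperf run be reproduced by hand (the outer subsystems/bdev envelope is an assumption here, since the trace only shows the inner entry):

    # /tmp/bdevperf.json (sketch; envelope assumed)
    { "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false },
          "method": "bdev_nvme_attach_controller"
        } ]
    } ] }
    # then: build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1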
00:11:48.514 19:42:29 -- nvmf/common.sh@546 -- # IFS=, 00:11:48.514 19:42:29 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:48.514 "params": { 00:11:48.514 "name": "Nvme0", 00:11:48.514 "trtype": "tcp", 00:11:48.514 "traddr": "10.0.0.2", 00:11:48.514 "adrfam": "ipv4", 00:11:48.514 "trsvcid": "4420", 00:11:48.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:48.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:48.514 "hdgst": false, 00:11:48.514 "ddgst": false 00:11:48.514 }, 00:11:48.514 "method": "bdev_nvme_attach_controller" 00:11:48.514 }' 00:11:48.514 [2024-04-24 19:42:29.635497] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:11:48.514 [2024-04-24 19:42:29.635571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661350 ] 00:11:48.514 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.514 [2024-04-24 19:42:29.695468] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.514 [2024-04-24 19:42:29.806921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.772 Running I/O for 1 seconds... 00:11:49.706 00:11:49.706 Latency(us) 00:11:49.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:49.706 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:49.706 Verification LBA range: start 0x0 length 0x400 00:11:49.706 Nvme0n1 : 1.03 1247.17 77.95 0.00 0.00 50575.00 12184.84 41554.68 00:11:49.706 =================================================================================================================== 00:11:49.706 Total : 1247.17 77.95 0.00 0.00 50575.00 12184.84 41554.68 00:11:49.965 19:42:31 -- target/host_management.sh@102 -- # stoptarget 00:11:49.965 19:42:31 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:49.965 19:42:31 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:49.965 19:42:31 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:49.965 19:42:31 -- target/host_management.sh@40 -- # nvmftestfini 00:11:49.965 19:42:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:49.965 19:42:31 -- nvmf/common.sh@117 -- # sync 00:11:49.965 19:42:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:49.965 19:42:31 -- nvmf/common.sh@120 -- # set +e 00:11:49.965 19:42:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:49.965 19:42:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:49.965 rmmod nvme_tcp 00:11:49.965 rmmod nvme_fabrics 00:11:49.965 rmmod nvme_keyring 00:11:49.965 19:42:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:49.965 19:42:31 -- nvmf/common.sh@124 -- # set -e 00:11:49.965 19:42:31 -- nvmf/common.sh@125 -- # return 0 00:11:49.965 19:42:31 -- nvmf/common.sh@478 -- # '[' -n 1661029 ']' 00:11:49.965 19:42:31 -- nvmf/common.sh@479 -- # killprocess 1661029 00:11:49.965 19:42:31 -- common/autotest_common.sh@936 -- # '[' -z 1661029 ']' 00:11:49.965 19:42:31 -- common/autotest_common.sh@940 -- # kill -0 1661029 00:11:49.965 19:42:31 -- common/autotest_common.sh@941 -- # uname 00:11:50.223 19:42:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:50.223 19:42:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1661029 00:11:50.223 19:42:31 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:50.223 19:42:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:50.223 19:42:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1661029' 00:11:50.223 killing process with pid 1661029 00:11:50.223 19:42:31 -- common/autotest_common.sh@955 -- # kill 1661029 00:11:50.223 19:42:31 -- common/autotest_common.sh@960 -- # wait 1661029 00:11:50.482 [2024-04-24 19:42:31.772547] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:50.482 19:42:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:50.482 19:42:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:50.482 19:42:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:50.482 19:42:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:50.482 19:42:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:50.482 19:42:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.482 19:42:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.482 19:42:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.405 19:42:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:52.405 00:11:52.405 real 0m6.527s 00:11:52.405 user 0m19.109s 00:11:52.405 sys 0m1.160s 00:11:52.405 19:42:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:52.405 19:42:33 -- common/autotest_common.sh@10 -- # set +x 00:11:52.405 ************************************ 00:11:52.405 END TEST nvmf_host_management 00:11:52.406 ************************************ 00:11:52.406 19:42:33 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:52.406 00:11:52.406 real 0m8.715s 00:11:52.406 user 0m19.838s 00:11:52.406 sys 0m2.625s 00:11:52.406 19:42:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:52.406 19:42:33 -- common/autotest_common.sh@10 -- # set +x 00:11:52.406 ************************************ 00:11:52.406 END TEST nvmf_host_management 00:11:52.406 ************************************ 00:11:52.406 19:42:33 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:52.406 19:42:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:52.406 19:42:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:52.406 19:42:33 -- common/autotest_common.sh@10 -- # set +x 00:11:52.664 ************************************ 00:11:52.664 START TEST nvmf_lvol 00:11:52.664 ************************************ 00:11:52.664 19:42:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:52.664 * Looking for test storage... 
00:11:52.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.664 19:42:34 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.664 19:42:34 -- nvmf/common.sh@7 -- # uname -s 00:11:52.664 19:42:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.664 19:42:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.664 19:42:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.664 19:42:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.664 19:42:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.664 19:42:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.664 19:42:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.664 19:42:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.664 19:42:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.664 19:42:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.664 19:42:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:52.664 19:42:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:52.664 19:42:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.664 19:42:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.664 19:42:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.664 19:42:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.664 19:42:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.664 19:42:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.664 19:42:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.664 19:42:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.664 19:42:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.665 19:42:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.665 19:42:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.665 19:42:34 -- paths/export.sh@5 -- # export PATH 00:11:52.665 19:42:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.665 19:42:34 -- nvmf/common.sh@47 -- # : 0 00:11:52.665 19:42:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:52.665 19:42:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:52.665 19:42:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.665 19:42:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.665 19:42:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.665 19:42:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:52.665 19:42:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:52.665 19:42:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:52.665 19:42:34 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:52.665 19:42:34 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:52.665 19:42:34 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:52.665 19:42:34 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:52.665 19:42:34 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:52.665 19:42:34 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:52.665 19:42:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:52.665 19:42:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.665 19:42:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:52.665 19:42:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:52.665 19:42:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:52.665 19:42:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.665 19:42:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.665 19:42:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.665 19:42:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:52.665 19:42:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:52.665 19:42:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:52.665 19:42:34 -- common/autotest_common.sh@10 -- # set +x 00:11:54.566 19:42:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:54.566 19:42:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:54.566 19:42:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:54.566 19:42:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:54.566 19:42:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:54.566 19:42:36 
-- nvmf/common.sh@293 -- # pci_drivers=() 00:11:54.566 19:42:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:54.566 19:42:36 -- nvmf/common.sh@295 -- # net_devs=() 00:11:54.566 19:42:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:54.566 19:42:36 -- nvmf/common.sh@296 -- # e810=() 00:11:54.566 19:42:36 -- nvmf/common.sh@296 -- # local -ga e810 00:11:54.566 19:42:36 -- nvmf/common.sh@297 -- # x722=() 00:11:54.566 19:42:36 -- nvmf/common.sh@297 -- # local -ga x722 00:11:54.825 19:42:36 -- nvmf/common.sh@298 -- # mlx=() 00:11:54.825 19:42:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:54.825 19:42:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.825 19:42:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.825 19:42:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.825 19:42:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.825 19:42:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.825 19:42:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.825 19:42:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.825 19:42:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.825 19:42:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.825 19:42:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.825 19:42:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.825 19:42:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:54.825 19:42:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:54.825 19:42:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:54.825 19:42:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.825 19:42:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:54.825 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:54.825 19:42:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.825 19:42:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:54.825 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:54.825 19:42:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:54.825 19:42:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.825 19:42:36 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.825 19:42:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:54.825 19:42:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.825 19:42:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:54.825 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:54.825 19:42:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.825 19:42:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.825 19:42:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.825 19:42:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:54.825 19:42:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.825 19:42:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:54.825 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:54.825 19:42:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.825 19:42:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:54.825 19:42:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:54.825 19:42:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:54.825 19:42:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.825 19:42:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.825 19:42:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.825 19:42:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:54.825 19:42:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.825 19:42:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.825 19:42:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:54.825 19:42:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.825 19:42:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.825 19:42:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:54.825 19:42:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:54.825 19:42:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.825 19:42:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.825 19:42:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.825 19:42:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.825 19:42:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:54.825 19:42:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.825 19:42:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.825 19:42:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.825 19:42:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:54.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:54.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:11:54.825 00:11:54.825 --- 10.0.0.2 ping statistics --- 00:11:54.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.825 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:11:54.825 19:42:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:54.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:11:54.825 00:11:54.825 --- 10.0.0.1 ping statistics --- 00:11:54.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.825 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:11:54.825 19:42:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.825 19:42:36 -- nvmf/common.sh@411 -- # return 0 00:11:54.825 19:42:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:54.825 19:42:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.825 19:42:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:54.825 19:42:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.825 19:42:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:54.825 19:42:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:54.825 19:42:36 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:54.825 19:42:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:54.825 19:42:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:54.825 19:42:36 -- common/autotest_common.sh@10 -- # set +x 00:11:54.825 19:42:36 -- nvmf/common.sh@470 -- # nvmfpid=1663538 00:11:54.826 19:42:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:54.826 19:42:36 -- nvmf/common.sh@471 -- # waitforlisten 1663538 00:11:54.826 19:42:36 -- common/autotest_common.sh@817 -- # '[' -z 1663538 ']' 00:11:54.826 19:42:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.826 19:42:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:54.826 19:42:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.826 19:42:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:54.826 19:42:36 -- common/autotest_common.sh@10 -- # set +x 00:11:54.826 [2024-04-24 19:42:36.297789] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:11:54.826 [2024-04-24 19:42:36.297879] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.826 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.084 [2024-04-24 19:42:36.367793] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:55.084 [2024-04-24 19:42:36.482908] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.084 [2024-04-24 19:42:36.482977] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.084 [2024-04-24 19:42:36.483003] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.084 [2024-04-24 19:42:36.483017] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.084 [2024-04-24 19:42:36.483029] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
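For anyone reconstructing this fixture outside CI: the nvmf_tcp_init sequence traced above (and repeated in the later tests) parks the target-side port in a network namespace and leaves the initiator port in the root namespace, so target and initiator can talk over real e810 hardware on one box. Condensed, with the interface names from this machine:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through the host firewall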
00:11:55.084 [2024-04-24 19:42:36.483120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.084 [2024-04-24 19:42:36.483194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.084 [2024-04-24 19:42:36.483197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.019 19:42:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:56.019 19:42:37 -- common/autotest_common.sh@850 -- # return 0 00:11:56.019 19:42:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:56.019 19:42:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:56.019 19:42:37 -- common/autotest_common.sh@10 -- # set +x 00:11:56.019 19:42:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.019 19:42:37 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:56.019 [2024-04-24 19:42:37.508131] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.019 19:42:37 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.277 19:42:37 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:56.277 19:42:37 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.535 19:42:38 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:56.535 19:42:38 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:56.793 19:42:38 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:57.051 19:42:38 -- target/nvmf_lvol.sh@29 -- # lvs=64db35dc-3678-46bf-bde5-481fc25b8bfa 00:11:57.051 19:42:38 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 64db35dc-3678-46bf-bde5-481fc25b8bfa lvol 20 00:11:57.309 19:42:38 -- target/nvmf_lvol.sh@32 -- # lvol=b5cdd8ef-cdeb-4ce6-afcd-b05020c17cbb 00:11:57.309 19:42:38 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:57.566 19:42:39 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b5cdd8ef-cdeb-4ce6-afcd-b05020c17cbb 00:11:57.824 19:42:39 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:58.083 [2024-04-24 19:42:39.491429] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.083 19:42:39 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:58.341 19:42:39 -- target/nvmf_lvol.sh@42 -- # perf_pid=1664007 00:11:58.341 19:42:39 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:58.341 19:42:39 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:58.341 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.279 
19:42:40 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b5cdd8ef-cdeb-4ce6-afcd-b05020c17cbb MY_SNAPSHOT 00:11:59.846 19:42:41 -- target/nvmf_lvol.sh@47 -- # snapshot=2c1f1bab-2284-4d55-a2c3-1d04e75f693f 00:11:59.846 19:42:41 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b5cdd8ef-cdeb-4ce6-afcd-b05020c17cbb 30 00:12:00.104 19:42:41 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2c1f1bab-2284-4d55-a2c3-1d04e75f693f MY_CLONE 00:12:00.362 19:42:41 -- target/nvmf_lvol.sh@49 -- # clone=b27a3ec5-83ee-406f-bd65-9864b3349c3b 00:12:00.362 19:42:41 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b27a3ec5-83ee-406f-bd65-9864b3349c3b 00:12:00.929 19:42:42 -- target/nvmf_lvol.sh@53 -- # wait 1664007 00:12:09.038 Initializing NVMe Controllers 00:12:09.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:09.038 Controller IO queue size 128, less than required. 00:12:09.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:09.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:09.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:09.038 Initialization complete. Launching workers. 00:12:09.038 ======================================================== 00:12:09.038 Latency(us) 00:12:09.038 Device Information : IOPS MiB/s Average min max 00:12:09.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10583.80 41.34 12100.42 1093.67 74478.90 00:12:09.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10520.20 41.09 12174.54 1983.99 61531.20 00:12:09.038 ======================================================== 00:12:09.038 Total : 21104.00 82.44 12137.37 1093.67 74478.90 00:12:09.038 00:12:09.038 19:42:50 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:09.038 19:42:50 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b5cdd8ef-cdeb-4ce6-afcd-b05020c17cbb 00:12:09.296 19:42:50 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 64db35dc-3678-46bf-bde5-481fc25b8bfa 00:12:09.557 19:42:50 -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:09.557 19:42:50 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:09.557 19:42:50 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:09.557 19:42:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:09.557 19:42:50 -- nvmf/common.sh@117 -- # sync 00:12:09.557 19:42:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:09.557 19:42:50 -- nvmf/common.sh@120 -- # set +e 00:12:09.557 19:42:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.557 19:42:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:09.557 rmmod nvme_tcp 00:12:09.557 rmmod nvme_fabrics 00:12:09.557 rmmod nvme_keyring 00:12:09.557 19:42:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:09.557 19:42:50 -- nvmf/common.sh@124 -- # set -e 00:12:09.557 19:42:50 -- nvmf/common.sh@125 -- # return 0 00:12:09.557 19:42:50 -- nvmf/common.sh@478 -- # '[' -n 1663538 ']' 
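The core of the lvol test above, stripped of the CI wrapping: a raid0 of two malloc bdevs hosts an lvstore, and a 20 MiB lvol is snapshotted, resized, cloned and inflated while spdk_nvme_perf writes to it over TCP. As a sketch (capturing the UUIDs in shell variables is illustrative; each RPC prints the new object's UUID, as the trace shows):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)        # 20 MiB volume on the store
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)    # lvol becomes a thin clone of the snapshot
    $rpc bdev_lvol_resize "$lvol" 30                       # grow the live volume to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                        # copy in shared clusters, dropping the snapshot dependency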
00:12:09.557 19:42:50 -- nvmf/common.sh@479 -- # killprocess 1663538 00:12:09.557 19:42:50 -- common/autotest_common.sh@936 -- # '[' -z 1663538 ']' 00:12:09.557 19:42:50 -- common/autotest_common.sh@940 -- # kill -0 1663538 00:12:09.557 19:42:50 -- common/autotest_common.sh@941 -- # uname 00:12:09.557 19:42:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:09.557 19:42:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1663538 00:12:09.557 19:42:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:09.557 19:42:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:09.557 19:42:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1663538' 00:12:09.557 killing process with pid 1663538 00:12:09.557 19:42:51 -- common/autotest_common.sh@955 -- # kill 1663538 00:12:09.557 19:42:51 -- common/autotest_common.sh@960 -- # wait 1663538 00:12:10.124 19:42:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:10.124 19:42:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:10.124 19:42:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:10.124 19:42:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:10.124 19:42:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:10.124 19:42:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.124 19:42:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.124 19:42:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.027 19:42:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:12.027 00:12:12.027 real 0m19.398s 00:12:12.027 user 1m6.111s 00:12:12.027 sys 0m5.507s 00:12:12.027 19:42:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:12.027 19:42:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.027 ************************************ 00:12:12.027 END TEST nvmf_lvol 00:12:12.027 ************************************ 00:12:12.027 19:42:53 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:12.027 19:42:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:12.027 19:42:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:12.027 19:42:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.027 ************************************ 00:12:12.027 START TEST nvmf_lvs_grow 00:12:12.027 ************************************ 00:12:12.027 19:42:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:12.285 * Looking for test storage... 
00:12:12.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.285 19:42:53 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.285 19:42:53 -- nvmf/common.sh@7 -- # uname -s 00:12:12.285 19:42:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.285 19:42:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.285 19:42:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.285 19:42:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.285 19:42:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.285 19:42:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.285 19:42:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.285 19:42:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.285 19:42:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.285 19:42:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.285 19:42:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:12.285 19:42:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:12.285 19:42:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.285 19:42:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.285 19:42:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.285 19:42:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.285 19:42:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.285 19:42:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.285 19:42:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.285 19:42:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.285 19:42:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.285 19:42:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.285 19:42:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.285 19:42:53 -- paths/export.sh@5 -- # export PATH 00:12:12.285 19:42:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.285 19:42:53 -- nvmf/common.sh@47 -- # : 0 00:12:12.285 19:42:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:12.285 19:42:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:12.285 19:42:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.285 19:42:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.285 19:42:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.285 19:42:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:12.285 19:42:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:12.285 19:42:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:12.286 19:42:53 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:12.286 19:42:53 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:12.286 19:42:53 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:12:12.286 19:42:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:12.286 19:42:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.286 19:42:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:12.286 19:42:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:12.286 19:42:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:12.286 19:42:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.286 19:42:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:12.286 19:42:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.286 19:42:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:12.286 19:42:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:12.286 19:42:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:12.286 19:42:53 -- common/autotest_common.sh@10 -- # set +x 00:12:14.183 19:42:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:14.183 19:42:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:14.183 19:42:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:14.183 19:42:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:14.183 19:42:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:14.183 19:42:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:14.183 19:42:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:14.183 19:42:55 -- nvmf/common.sh@295 -- # net_devs=() 00:12:14.183 19:42:55 
-- nvmf/common.sh@295 -- # local -ga net_devs 00:12:14.183 19:42:55 -- nvmf/common.sh@296 -- # e810=() 00:12:14.183 19:42:55 -- nvmf/common.sh@296 -- # local -ga e810 00:12:14.183 19:42:55 -- nvmf/common.sh@297 -- # x722=() 00:12:14.183 19:42:55 -- nvmf/common.sh@297 -- # local -ga x722 00:12:14.184 19:42:55 -- nvmf/common.sh@298 -- # mlx=() 00:12:14.184 19:42:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:14.184 19:42:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.184 19:42:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.184 19:42:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.184 19:42:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.184 19:42:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.184 19:42:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.184 19:42:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.184 19:42:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.184 19:42:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.184 19:42:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.184 19:42:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.184 19:42:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:14.184 19:42:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:14.184 19:42:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:14.184 19:42:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.184 19:42:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:14.184 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:14.184 19:42:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.184 19:42:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:14.184 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:14.184 19:42:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:14.184 19:42:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.184 19:42:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.184 19:42:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:14.184 19:42:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.184 19:42:55 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:14.184 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:14.184 19:42:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.184 19:42:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.184 19:42:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.184 19:42:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:14.184 19:42:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.184 19:42:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:14.184 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:14.184 19:42:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.184 19:42:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:14.184 19:42:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:14.184 19:42:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:14.184 19:42:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.184 19:42:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.184 19:42:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.184 19:42:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:14.184 19:42:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.184 19:42:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.184 19:42:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:14.184 19:42:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.184 19:42:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.184 19:42:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:14.184 19:42:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:14.184 19:42:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.184 19:42:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.184 19:42:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.184 19:42:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.184 19:42:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:14.184 19:42:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.184 19:42:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.184 19:42:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.184 19:42:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:14.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:12:14.184 00:12:14.184 --- 10.0.0.2 ping statistics --- 00:12:14.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.184 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:12:14.184 19:42:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:14.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:12:14.184 00:12:14.184 --- 10.0.0.1 ping statistics --- 00:12:14.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.184 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:12:14.184 19:42:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.184 19:42:55 -- nvmf/common.sh@411 -- # return 0 00:12:14.184 19:42:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:14.184 19:42:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.184 19:42:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:14.184 19:42:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.184 19:42:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:14.184 19:42:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:14.184 19:42:55 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:12:14.184 19:42:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:14.184 19:42:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:14.184 19:42:55 -- common/autotest_common.sh@10 -- # set +x 00:12:14.184 19:42:55 -- nvmf/common.sh@470 -- # nvmfpid=1667280 00:12:14.184 19:42:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:14.184 19:42:55 -- nvmf/common.sh@471 -- # waitforlisten 1667280 00:12:14.184 19:42:55 -- common/autotest_common.sh@817 -- # '[' -z 1667280 ']' 00:12:14.184 19:42:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.184 19:42:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:14.184 19:42:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.184 19:42:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:14.184 19:42:55 -- common/autotest_common.sh@10 -- # set +x 00:12:14.184 [2024-04-24 19:42:55.685711] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:12:14.184 [2024-04-24 19:42:55.685796] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.443 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.443 [2024-04-24 19:42:55.752413] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.443 [2024-04-24 19:42:55.857065] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.443 [2024-04-24 19:42:55.857122] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.443 [2024-04-24 19:42:55.857147] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.443 [2024-04-24 19:42:55.857173] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.443 [2024-04-24 19:42:55.857183] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
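The nvmfappstart helper traced above reduces to: launch nvmf_tgt inside the target namespace, record its pid, and wait until the RPC socket answers. Roughly (waitforlisten is the autotest_common.sh helper seen in this log; its polling internals are elided):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"     # blocks until /var/tmp/spdk.sock accepts RPCs
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192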
00:12:14.443 [2024-04-24 19:42:55.857214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.701 19:42:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:14.701 19:42:55 -- common/autotest_common.sh@850 -- # return 0 00:12:14.701 19:42:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:14.701 19:42:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:14.701 19:42:55 -- common/autotest_common.sh@10 -- # set +x 00:12:14.701 19:42:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.701 19:42:55 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:14.701 [2024-04-24 19:42:56.208779] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.961 19:42:56 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:12:14.961 19:42:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:14.961 19:42:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:14.961 19:42:56 -- common/autotest_common.sh@10 -- # set +x 00:12:14.961 ************************************ 00:12:14.961 START TEST lvs_grow_clean 00:12:14.961 ************************************ 00:12:14.961 19:42:56 -- common/autotest_common.sh@1111 -- # lvs_grow 00:12:14.961 19:42:56 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:14.961 19:42:56 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:14.961 19:42:56 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:14.961 19:42:56 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:14.961 19:42:56 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:14.961 19:42:56 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:14.961 19:42:56 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:14.961 19:42:56 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:14.961 19:42:56 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:15.220 19:42:56 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:15.220 19:42:56 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:15.477 19:42:56 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a25bec9e-ce17-4d31-8e36-dae90357fc9a 00:12:15.477 19:42:56 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a25bec9e-ce17-4d31-8e36-dae90357fc9a 00:12:15.477 19:42:56 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:15.735 19:42:57 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:15.735 19:42:57 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:15.735 19:42:57 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a25bec9e-ce17-4d31-8e36-dae90357fc9a lvol 150 00:12:15.992 19:42:57 -- target/nvmf_lvs_grow.sh@33 -- # lvol=09a9a758-4f1b-406c-ae86-dc212c387c77 00:12:15.992 19:42:57 -- 
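Everything lvs_grow_clean needs starts from a plain 200 MiB file: bdev_aio_create wraps it in an AIO bdev with a 4096-byte block size, and bdev_lvol_create_lvstore carves that bdev into 4 MiB (4194304-byte) clusters. The 49 total_data_clusters asserted just below can be checked by hand: 200 MiB / 4 MiB = 50 clusters, one of which goes to lvstore metadata (50 - 49). A condensed sketch of the same calls, where $SPDK is shorthand introduced here for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, and lvs captures the UUID the create call prints (a25bec9e-... in this run):

    truncate -s 200M "$SPDK/test/nvmf/target/aio_bdev"
    $SPDK/scripts/rpc.py bdev_aio_create "$SPDK/test/nvmf/target/aio_bdev" aio_bdev 4096
    lvs=$($SPDK/scripts/rpc.py bdev_lvol_create_lvstore \
            --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)

    # 200 MiB / 4 MiB = 50 clusters, minus 1 for metadata => 49
    $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'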
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:15.992 19:42:57 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:16.250 [2024-04-24 19:42:57.641830] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:16.250 [2024-04-24 19:42:57.641914] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:16.250 true 00:12:16.250 19:42:57 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a25bec9e-ce17-4d31-8e36-dae90357fc9a 00:12:16.250 19:42:57 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:16.507 19:42:57 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:16.507 19:42:57 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:16.765 19:42:58 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 09a9a758-4f1b-406c-ae86-dc212c387c77 00:12:17.023 19:42:58 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:17.312 [2024-04-24 19:42:58.701073] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.312 19:42:58 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:17.576 19:42:58 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1667723 00:12:17.576 19:42:58 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:17.576 19:42:58 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:17.576 19:42:58 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1667723 /var/tmp/bdevperf.sock 00:12:17.576 19:42:58 -- common/autotest_common.sh@817 -- # '[' -z 1667723 ']' 00:12:17.576 19:42:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:17.576 19:42:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:17.576 19:42:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:17.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:17.576 19:42:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:17.576 19:42:58 -- common/autotest_common.sh@10 -- # set +x 00:12:17.576 [2024-04-24 19:42:59.028121] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
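The rest of the clean-test setup reads straight out of the trace above: a 150 MiB lvol is created, the backing file is grown to 400 MiB, and bdev_aio_rescan makes the AIO bdev pick up the new size (51200 to 102400 blocks) while the lvstore itself stays at 49 data clusters until bdev_lvol_grow_lvstore runs mid-workload. The lvol is then exported over NVMe/TCP for bdevperf to write against. A sketch of that export, reusing $SPDK and $lvs from the previous sketch, with lvol holding the lvol UUID (09a9a758-... here); the assertions later in the log follow from the same cluster arithmetic: 400 MiB / 4 MiB - 1 = 99 data clusters, of which the lvol pins ceil(150 / 4) = 38, leaving 61 free:

    lvol=$($SPDK/scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M "$SPDK/test/nvmf/target/aio_bdev"
    $SPDK/scripts/rpc.py bdev_aio_rescan aio_bdev        # 51200 -> 102400 blocks

    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
            -t tcp -a 10.0.0.2 -s 4420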
00:12:17.576 [2024-04-24 19:42:59.028206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667723 ] 00:12:17.576 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.835 [2024-04-24 19:42:59.090473] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.835 [2024-04-24 19:42:59.205384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.767 19:42:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:18.767 19:42:59 -- common/autotest_common.sh@850 -- # return 0 00:12:18.767 19:42:59 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:19.023 Nvme0n1 00:12:19.023 19:43:00 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:19.281 [ 00:12:19.281 { 00:12:19.281 "name": "Nvme0n1", 00:12:19.281 "aliases": [ 00:12:19.281 "09a9a758-4f1b-406c-ae86-dc212c387c77" 00:12:19.281 ], 00:12:19.281 "product_name": "NVMe disk", 00:12:19.281 "block_size": 4096, 00:12:19.281 "num_blocks": 38912, 00:12:19.281 "uuid": "09a9a758-4f1b-406c-ae86-dc212c387c77", 00:12:19.281 "assigned_rate_limits": { 00:12:19.281 "rw_ios_per_sec": 0, 00:12:19.281 "rw_mbytes_per_sec": 0, 00:12:19.281 "r_mbytes_per_sec": 0, 00:12:19.281 "w_mbytes_per_sec": 0 00:12:19.281 }, 00:12:19.281 "claimed": false, 00:12:19.281 "zoned": false, 00:12:19.281 "supported_io_types": { 00:12:19.281 "read": true, 00:12:19.281 "write": true, 00:12:19.281 "unmap": true, 00:12:19.281 "write_zeroes": true, 00:12:19.281 "flush": true, 00:12:19.281 "reset": true, 00:12:19.281 "compare": true, 00:12:19.281 "compare_and_write": true, 00:12:19.281 "abort": true, 00:12:19.281 "nvme_admin": true, 00:12:19.281 "nvme_io": true 00:12:19.281 }, 00:12:19.281 "memory_domains": [ 00:12:19.281 { 00:12:19.281 "dma_device_id": "system", 00:12:19.281 "dma_device_type": 1 00:12:19.281 } 00:12:19.281 ], 00:12:19.281 "driver_specific": { 00:12:19.281 "nvme": [ 00:12:19.281 { 00:12:19.281 "trid": { 00:12:19.281 "trtype": "TCP", 00:12:19.281 "adrfam": "IPv4", 00:12:19.281 "traddr": "10.0.0.2", 00:12:19.281 "trsvcid": "4420", 00:12:19.281 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:19.281 }, 00:12:19.281 "ctrlr_data": { 00:12:19.281 "cntlid": 1, 00:12:19.281 "vendor_id": "0x8086", 00:12:19.281 "model_number": "SPDK bdev Controller", 00:12:19.281 "serial_number": "SPDK0", 00:12:19.281 "firmware_revision": "24.05", 00:12:19.281 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:19.281 "oacs": { 00:12:19.281 "security": 0, 00:12:19.281 "format": 0, 00:12:19.281 "firmware": 0, 00:12:19.281 "ns_manage": 0 00:12:19.281 }, 00:12:19.281 "multi_ctrlr": true, 00:12:19.281 "ana_reporting": false 00:12:19.281 }, 00:12:19.281 "vs": { 00:12:19.281 "nvme_version": "1.3" 00:12:19.281 }, 00:12:19.281 "ns_data": { 00:12:19.281 "id": 1, 00:12:19.281 "can_share": true 00:12:19.281 } 00:12:19.281 } 00:12:19.281 ], 00:12:19.281 "mp_policy": "active_passive" 00:12:19.281 } 00:12:19.281 } 00:12:19.281 ] 00:12:19.281 19:43:00 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1667869 00:12:19.281 19:43:00 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:19.281 19:43:00 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:19.281 Running I/O for 10 seconds... 00:12:20.656 Latency(us) 00:12:20.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:20.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.656 Nvme0n1 : 1.00 14151.00 55.28 0.00 0.00 0.00 0.00 0.00 00:12:20.656 =================================================================================================================== 00:12:20.656 Total : 14151.00 55.28 0.00 0.00 0.00 0.00 0.00 00:12:20.656 00:12:21.221 19:43:02 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a25bec9e-ce17-4d31-8e36-dae90357fc9a 00:12:21.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.479 Nvme0n1 : 2.00 14179.50 55.39 0.00 0.00 0.00 0.00 0.00 00:12:21.479 =================================================================================================================== 00:12:21.479 Total : 14179.50 55.39 0.00 0.00 0.00 0.00 0.00 00:12:21.479 00:12:21.479 true 00:12:21.479 19:43:02 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a25bec9e-ce17-4d31-8e36-dae90357fc9a 00:12:21.479 19:43:02 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:21.737 19:43:03 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:21.737 19:43:03 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:21.737 19:43:03 -- target/nvmf_lvs_grow.sh@65 -- # wait 1667869 00:12:22.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.302 Nvme0n1 : 3.00 14188.67 55.42 0.00 0.00 0.00 0.00 0.00 00:12:22.302 =================================================================================================================== 00:12:22.302 Total : 14188.67 55.42 0.00 0.00 0.00 0.00 0.00 00:12:22.302 00:12:23.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:23.675 Nvme0n1 : 4.00 14273.75 55.76 0.00 0.00 0.00 0.00 0.00 00:12:23.675 =================================================================================================================== 00:12:23.675 Total : 14273.75 55.76 0.00 0.00 0.00 0.00 0.00 00:12:23.675 00:12:24.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.610 Nvme0n1 : 5.00 14298.80 55.85 0.00 0.00 0.00 0.00 0.00 00:12:24.610 =================================================================================================================== 00:12:24.610 Total : 14298.80 55.85 0.00 0.00 0.00 0.00 0.00 00:12:24.610 00:12:25.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:25.545 Nvme0n1 : 6.00 14315.83 55.92 0.00 0.00 0.00 0.00 0.00 00:12:25.545 =================================================================================================================== 00:12:25.545 Total : 14315.83 55.92 0.00 0.00 0.00 0.00 0.00 00:12:25.545 00:12:26.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:26.479 Nvme0n1 : 7.00 14337.00 56.00 0.00 0.00 0.00 0.00 0.00 00:12:26.479 =================================================================================================================== 00:12:26.479 Total : 14337.00 56.00 0.00 0.00 0.00 0.00 0.00 00:12:26.479 00:12:27.415 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:12:27.415 Nvme0n1 : 8.00 14392.75 56.22 0.00 0.00 0.00 0.00 0.00 00:12:27.415 =================================================================================================================== 00:12:27.415 Total : 14392.75 56.22 0.00 0.00 0.00 0.00 0.00 00:12:27.415 00:12:28.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:28.351 Nvme0n1 : 9.00 14400.78 56.25 0.00 0.00 0.00 0.00 0.00 00:12:28.351 =================================================================================================================== 00:12:28.351 Total : 14400.78 56.25 0.00 0.00 0.00 0.00 0.00 00:12:28.351 00:12:29.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:29.286 Nvme0n1 : 10.00 14432.60 56.38 0.00 0.00 0.00 0.00 0.00 00:12:29.286 =================================================================================================================== 00:12:29.286 Total : 14432.60 56.38 0.00 0.00 0.00 0.00 0.00 00:12:29.286 00:12:29.286 00:12:29.286 Latency(us) 00:12:29.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:29.286 Nvme0n1 : 10.01 14437.98 56.40 0.00 0.00 8860.28 5218.61 17767.54 00:12:29.286 =================================================================================================================== 00:12:29.286 Total : 14437.98 56.40 0.00 0.00 8860.28 5218.61 17767.54 00:12:29.286 0 00:12:29.545 19:43:10 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1667723 00:12:29.545 19:43:10 -- common/autotest_common.sh@936 -- # '[' -z 1667723 ']' 00:12:29.545 19:43:10 -- common/autotest_common.sh@940 -- # kill -0 1667723 00:12:29.545 19:43:10 -- common/autotest_common.sh@941 -- # uname 00:12:29.545 19:43:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:29.545 19:43:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1667723 00:12:29.545 19:43:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:29.545 19:43:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:29.545 19:43:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1667723' 00:12:29.545 killing process with pid 1667723 00:12:29.545 19:43:10 -- common/autotest_common.sh@955 -- # kill 1667723 00:12:29.545 Received shutdown signal, test time was about 10.000000 seconds 00:12:29.545 00:12:29.545 Latency(us) 00:12:29.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.545 =================================================================================================================== 00:12:29.545 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:29.545 19:43:10 -- common/autotest_common.sh@960 -- # wait 1667723 00:12:29.804 19:43:11 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:30.062 19:43:11 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a25bec9e-ce17-4d31-8e36-dae90357fc9a 00:12:30.062 19:43:11 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:12:30.326 19:43:11 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:12:30.326 19:43:11 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:12:30.326 19:43:11 -- target/nvmf_lvs_grow.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:30.613 [2024-04-24 19:43:11.869377] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:30.613 19:43:11 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a25bec9e-ce17-4d31-8e36-dae90357fc9a 00:12:30.613 19:43:11 -- common/autotest_common.sh@638 -- # local es=0 00:12:30.613 19:43:11 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a25bec9e-ce17-4d31-8e36-dae90357fc9a 00:12:30.613 19:43:11 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:30.613 19:43:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:30.613 19:43:11 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:30.613 19:43:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:30.613 19:43:11 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:30.613 19:43:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:30.613 19:43:11 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:30.613 19:43:11 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:30.613 19:43:11 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a25bec9e-ce17-4d31-8e36-dae90357fc9a 00:12:30.871 request: 00:12:30.871 { 00:12:30.871 "uuid": "a25bec9e-ce17-4d31-8e36-dae90357fc9a", 00:12:30.871 "method": "bdev_lvol_get_lvstores", 00:12:30.871 "req_id": 1 00:12:30.871 } 00:12:30.871 Got JSON-RPC error response 00:12:30.871 response: 00:12:30.871 { 00:12:30.871 "code": -19, 00:12:30.871 "message": "No such device" 00:12:30.871 } 00:12:30.871 19:43:12 -- common/autotest_common.sh@641 -- # es=1 00:12:30.871 19:43:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:30.871 19:43:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:30.871 19:43:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:30.871 19:43:12 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:31.129 aio_bdev 00:12:31.129 19:43:12 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 09a9a758-4f1b-406c-ae86-dc212c387c77 00:12:31.129 19:43:12 -- common/autotest_common.sh@885 -- # local bdev_name=09a9a758-4f1b-406c-ae86-dc212c387c77 00:12:31.129 19:43:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:31.129 19:43:12 -- common/autotest_common.sh@887 -- # local i 00:12:31.129 19:43:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:31.129 19:43:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:31.129 19:43:12 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:31.388 19:43:12 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 09a9a758-4f1b-406c-ae86-dc212c387c77 -t 2000 
00:12:31.388 [ 00:12:31.388 { 00:12:31.388 "name": "09a9a758-4f1b-406c-ae86-dc212c387c77", 00:12:31.388 "aliases": [ 00:12:31.388 "lvs/lvol" 00:12:31.388 ], 00:12:31.388 "product_name": "Logical Volume", 00:12:31.388 "block_size": 4096, 00:12:31.388 "num_blocks": 38912, 00:12:31.388 "uuid": "09a9a758-4f1b-406c-ae86-dc212c387c77", 00:12:31.388 "assigned_rate_limits": { 00:12:31.388 "rw_ios_per_sec": 0, 00:12:31.388 "rw_mbytes_per_sec": 0, 00:12:31.388 "r_mbytes_per_sec": 0, 00:12:31.388 "w_mbytes_per_sec": 0 00:12:31.388 }, 00:12:31.388 "claimed": false, 00:12:31.388 "zoned": false, 00:12:31.388 "supported_io_types": { 00:12:31.388 "read": true, 00:12:31.388 "write": true, 00:12:31.388 "unmap": true, 00:12:31.388 "write_zeroes": true, 00:12:31.388 "flush": false, 00:12:31.388 "reset": true, 00:12:31.388 "compare": false, 00:12:31.388 "compare_and_write": false, 00:12:31.388 "abort": false, 00:12:31.388 "nvme_admin": false, 00:12:31.388 "nvme_io": false 00:12:31.388 }, 00:12:31.388 "driver_specific": { 00:12:31.388 "lvol": { 00:12:31.388 "lvol_store_uuid": "a25bec9e-ce17-4d31-8e36-dae90357fc9a", 00:12:31.388 "base_bdev": "aio_bdev", 00:12:31.388 "thin_provision": false, 00:12:31.388 "snapshot": false, 00:12:31.388 "clone": false, 00:12:31.388 "esnap_clone": false 00:12:31.388 } 00:12:31.388 } 00:12:31.388 } 00:12:31.388 ] 00:12:31.646 19:43:12 -- common/autotest_common.sh@893 -- # return 0 00:12:31.646 19:43:12 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a25bec9e-ce17-4d31-8e36-dae90357fc9a 00:12:31.646 19:43:12 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:12:31.646 19:43:13 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:12:31.646 19:43:13 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a25bec9e-ce17-4d31-8e36-dae90357fc9a 00:12:31.646 19:43:13 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:12:31.905 19:43:13 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:12:31.905 19:43:13 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 09a9a758-4f1b-406c-ae86-dc212c387c77 00:12:32.163 19:43:13 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a25bec9e-ce17-4d31-8e36-dae90357fc9a 00:12:32.420 19:43:13 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:32.678 19:43:14 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:32.678 00:12:32.678 real 0m17.831s 00:12:32.678 user 0m17.437s 00:12:32.678 sys 0m1.872s 00:12:32.678 19:43:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:32.678 19:43:14 -- common/autotest_common.sh@10 -- # set +x 00:12:32.678 ************************************ 00:12:32.678 END TEST lvs_grow_clean 00:12:32.678 ************************************ 00:12:32.935 19:43:14 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:32.935 19:43:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:32.935 19:43:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:32.935 19:43:14 -- common/autotest_common.sh@10 -- # set +x 00:12:32.935 ************************************ 00:12:32.935 START TEST lvs_grow_dirty 
00:12:32.935 ************************************ 00:12:32.935 19:43:14 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:12:32.935 19:43:14 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:32.935 19:43:14 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:32.935 19:43:14 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:32.935 19:43:14 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:32.935 19:43:14 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:32.935 19:43:14 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:32.935 19:43:14 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:32.935 19:43:14 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:32.935 19:43:14 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:33.193 19:43:14 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:33.193 19:43:14 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:33.451 19:43:14 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 00:12:33.451 19:43:14 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 00:12:33.451 19:43:14 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:33.709 19:43:15 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:33.709 19:43:15 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:33.709 19:43:15 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 lvol 150 00:12:33.967 19:43:15 -- target/nvmf_lvs_grow.sh@33 -- # lvol=f2912bb5-afe4-479c-98ca-53b9ec36c9ee 00:12:33.967 19:43:15 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:33.967 19:43:15 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:34.225 [2024-04-24 19:43:15.612936] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:34.225 [2024-04-24 19:43:15.613052] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:34.225 true 00:12:34.225 19:43:15 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 00:12:34.225 19:43:15 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:34.483 19:43:15 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:34.483 19:43:15 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:34.741 19:43:16 -- target/nvmf_lvs_grow.sh@42 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f2912bb5-afe4-479c-98ca-53b9ec36c9ee 00:12:34.999 19:43:16 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:35.257 19:43:16 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:35.515 19:43:16 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1669788 00:12:35.516 19:43:16 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:35.516 19:43:16 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:35.516 19:43:16 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1669788 /var/tmp/bdevperf.sock 00:12:35.516 19:43:16 -- common/autotest_common.sh@817 -- # '[' -z 1669788 ']' 00:12:35.516 19:43:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:35.516 19:43:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:35.516 19:43:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:35.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:35.516 19:43:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:35.516 19:43:16 -- common/autotest_common.sh@10 -- # set +x 00:12:35.516 [2024-04-24 19:43:16.896833] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
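As in the clean variant, bdevperf is launched with -z, so it starts idle on its own RPC socket instead of running the workload immediately; the test first attaches the exported namespace as a local NVMe bdev and only then triggers the 10-second randwrite run. The control flow, sketched with the paths from this run ($SPDK as before):

    $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
            -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # connect to the target over TCP; this creates bdev Nvme0n1 inside bdevperf
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # kick off the workload configured on the command line above
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests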
00:12:35.516 [2024-04-24 19:43:16.896908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1669788 ] 00:12:35.516 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.516 [2024-04-24 19:43:16.960334] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.774 [2024-04-24 19:43:17.084501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.774 19:43:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:35.774 19:43:17 -- common/autotest_common.sh@850 -- # return 0 00:12:35.774 19:43:17 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:36.032 Nvme0n1 00:12:36.032 19:43:17 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:36.290 [ 00:12:36.290 { 00:12:36.290 "name": "Nvme0n1", 00:12:36.290 "aliases": [ 00:12:36.290 "f2912bb5-afe4-479c-98ca-53b9ec36c9ee" 00:12:36.290 ], 00:12:36.290 "product_name": "NVMe disk", 00:12:36.290 "block_size": 4096, 00:12:36.290 "num_blocks": 38912, 00:12:36.290 "uuid": "f2912bb5-afe4-479c-98ca-53b9ec36c9ee", 00:12:36.290 "assigned_rate_limits": { 00:12:36.290 "rw_ios_per_sec": 0, 00:12:36.290 "rw_mbytes_per_sec": 0, 00:12:36.290 "r_mbytes_per_sec": 0, 00:12:36.290 "w_mbytes_per_sec": 0 00:12:36.290 }, 00:12:36.290 "claimed": false, 00:12:36.290 "zoned": false, 00:12:36.290 "supported_io_types": { 00:12:36.290 "read": true, 00:12:36.290 "write": true, 00:12:36.290 "unmap": true, 00:12:36.290 "write_zeroes": true, 00:12:36.290 "flush": true, 00:12:36.290 "reset": true, 00:12:36.290 "compare": true, 00:12:36.290 "compare_and_write": true, 00:12:36.290 "abort": true, 00:12:36.290 "nvme_admin": true, 00:12:36.290 "nvme_io": true 00:12:36.290 }, 00:12:36.290 "memory_domains": [ 00:12:36.290 { 00:12:36.290 "dma_device_id": "system", 00:12:36.290 "dma_device_type": 1 00:12:36.290 } 00:12:36.290 ], 00:12:36.290 "driver_specific": { 00:12:36.290 "nvme": [ 00:12:36.290 { 00:12:36.290 "trid": { 00:12:36.290 "trtype": "TCP", 00:12:36.290 "adrfam": "IPv4", 00:12:36.290 "traddr": "10.0.0.2", 00:12:36.290 "trsvcid": "4420", 00:12:36.290 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:36.290 }, 00:12:36.290 "ctrlr_data": { 00:12:36.290 "cntlid": 1, 00:12:36.290 "vendor_id": "0x8086", 00:12:36.290 "model_number": "SPDK bdev Controller", 00:12:36.290 "serial_number": "SPDK0", 00:12:36.290 "firmware_revision": "24.05", 00:12:36.290 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:36.290 "oacs": { 00:12:36.290 "security": 0, 00:12:36.290 "format": 0, 00:12:36.290 "firmware": 0, 00:12:36.290 "ns_manage": 0 00:12:36.290 }, 00:12:36.290 "multi_ctrlr": true, 00:12:36.290 "ana_reporting": false 00:12:36.290 }, 00:12:36.290 "vs": { 00:12:36.290 "nvme_version": "1.3" 00:12:36.290 }, 00:12:36.290 "ns_data": { 00:12:36.290 "id": 1, 00:12:36.290 "can_share": true 00:12:36.290 } 00:12:36.290 } 00:12:36.290 ], 00:12:36.290 "mp_policy": "active_passive" 00:12:36.290 } 00:12:36.290 } 00:12:36.290 ] 00:12:36.290 19:43:17 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1669924 00:12:36.290 19:43:17 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:36.290 19:43:17 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:36.549 Running I/O for 10 seconds... 00:12:37.486 Latency(us) 00:12:37.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:37.486 Nvme0n1 : 1.00 14112.00 55.12 0.00 0.00 0.00 0.00 0.00 00:12:37.486 =================================================================================================================== 00:12:37.486 Total : 14112.00 55.12 0.00 0.00 0.00 0.00 0.00 00:12:37.486 00:12:38.455 19:43:19 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 00:12:38.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.455 Nvme0n1 : 2.00 14151.50 55.28 0.00 0.00 0.00 0.00 0.00 00:12:38.455 =================================================================================================================== 00:12:38.455 Total : 14151.50 55.28 0.00 0.00 0.00 0.00 0.00 00:12:38.455 00:12:38.714 true 00:12:38.714 19:43:19 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 00:12:38.714 19:43:19 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:38.972 19:43:20 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:38.972 19:43:20 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:38.972 19:43:20 -- target/nvmf_lvs_grow.sh@65 -- # wait 1669924 00:12:39.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.539 Nvme0n1 : 3.00 14170.00 55.35 0.00 0.00 0.00 0.00 0.00 00:12:39.539 =================================================================================================================== 00:12:39.539 Total : 14170.00 55.35 0.00 0.00 0.00 0.00 0.00 00:12:39.539 00:12:40.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.474 Nvme0n1 : 4.00 14291.50 55.83 0.00 0.00 0.00 0.00 0.00 00:12:40.474 =================================================================================================================== 00:12:40.474 Total : 14291.50 55.83 0.00 0.00 0.00 0.00 0.00 00:12:40.474 00:12:41.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:41.409 Nvme0n1 : 5.00 14326.00 55.96 0.00 0.00 0.00 0.00 0.00 00:12:41.409 =================================================================================================================== 00:12:41.409 Total : 14326.00 55.96 0.00 0.00 0.00 0.00 0.00 00:12:41.409 00:12:42.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:42.344 Nvme0n1 : 6.00 14359.67 56.09 0.00 0.00 0.00 0.00 0.00 00:12:42.344 =================================================================================================================== 00:12:42.344 Total : 14359.67 56.09 0.00 0.00 0.00 0.00 0.00 00:12:42.344 00:12:43.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:43.724 Nvme0n1 : 7.00 14383.57 56.19 0.00 0.00 0.00 0.00 0.00 00:12:43.724 =================================================================================================================== 00:12:43.724 Total : 14383.57 56.19 0.00 0.00 0.00 0.00 0.00 00:12:43.724 00:12:44.657 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:12:44.657 Nvme0n1 : 8.00 14433.75 56.38 0.00 0.00 0.00 0.00 0.00 00:12:44.657 =================================================================================================================== 00:12:44.657 Total : 14433.75 56.38 0.00 0.00 0.00 0.00 0.00 00:12:44.657 00:12:45.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:45.591 Nvme0n1 : 9.00 14465.56 56.51 0.00 0.00 0.00 0.00 0.00 00:12:45.591 =================================================================================================================== 00:12:45.591 Total : 14465.56 56.51 0.00 0.00 0.00 0.00 0.00 00:12:45.591 00:12:46.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:46.524 Nvme0n1 : 10.00 14478.20 56.56 0.00 0.00 0.00 0.00 0.00 00:12:46.524 =================================================================================================================== 00:12:46.524 Total : 14478.20 56.56 0.00 0.00 0.00 0.00 0.00 00:12:46.524 00:12:46.524 00:12:46.524 Latency(us) 00:12:46.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:46.524 Nvme0n1 : 10.00 14477.35 56.55 0.00 0.00 8835.38 4466.16 15728.64 00:12:46.524 =================================================================================================================== 00:12:46.524 Total : 14477.35 56.55 0.00 0.00 8835.38 4466.16 15728.64 00:12:46.524 0 00:12:46.524 19:43:27 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1669788 00:12:46.524 19:43:27 -- common/autotest_common.sh@936 -- # '[' -z 1669788 ']' 00:12:46.524 19:43:27 -- common/autotest_common.sh@940 -- # kill -0 1669788 00:12:46.524 19:43:27 -- common/autotest_common.sh@941 -- # uname 00:12:46.524 19:43:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:46.524 19:43:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1669788 00:12:46.524 19:43:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:46.524 19:43:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:46.524 19:43:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1669788' 00:12:46.524 killing process with pid 1669788 00:12:46.524 19:43:27 -- common/autotest_common.sh@955 -- # kill 1669788 00:12:46.524 Received shutdown signal, test time was about 10.000000 seconds 00:12:46.524 00:12:46.524 Latency(us) 00:12:46.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.524 =================================================================================================================== 00:12:46.524 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:46.524 19:43:27 -- common/autotest_common.sh@960 -- # wait 1669788 00:12:46.782 19:43:28 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:47.039 19:43:28 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 00:12:47.039 19:43:28 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:12:47.297 19:43:28 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:12:47.297 19:43:28 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:12:47.297 19:43:28 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1667280 00:12:47.297 
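This kill -9 is what makes the dirty variant dirty: the first nvmf_tgt (the pid the test stored in $nvmfpid, 1667280 here) dies while the grown lvstore is still open, so the blobstore on aio_bdev never goes through a clean shutdown, and the wait that follows only reaps the job (hence the shell's "Killed" report just below). In sketch form:

    kill -9 "$nvmfpid"        # SIGKILL: no SPDK shutdown path runs, nothing is flushed
    wait "$nvmfpid" || true   # reap the job; bash prints "Killed"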
19:43:28 -- target/nvmf_lvs_grow.sh@74 -- # wait 1667280 00:12:47.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1667280 Killed "${NVMF_APP[@]}" "$@" 00:12:47.297 19:43:28 -- target/nvmf_lvs_grow.sh@74 -- # true 00:12:47.297 19:43:28 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:12:47.297 19:43:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:47.297 19:43:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:47.297 19:43:28 -- common/autotest_common.sh@10 -- # set +x 00:12:47.555 19:43:28 -- nvmf/common.sh@470 -- # nvmfpid=1671249 00:12:47.555 19:43:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:47.555 19:43:28 -- nvmf/common.sh@471 -- # waitforlisten 1671249 00:12:47.555 19:43:28 -- common/autotest_common.sh@817 -- # '[' -z 1671249 ']' 00:12:47.555 19:43:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.555 19:43:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:47.555 19:43:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.555 19:43:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:47.555 19:43:28 -- common/autotest_common.sh@10 -- # set +x 00:12:47.555 [2024-04-24 19:43:28.859059] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:12:47.555 [2024-04-24 19:43:28.859133] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.555 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.555 [2024-04-24 19:43:28.929227] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.555 [2024-04-24 19:43:29.044020] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.555 [2024-04-24 19:43:29.044082] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.555 [2024-04-24 19:43:29.044104] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.555 [2024-04-24 19:43:29.044116] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.555 [2024-04-24 19:43:29.044127] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
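When the restarted target (new pid 1671249) re-creates the AIO bdev over the same backing file, the lvstore is loaded rather than formatted, and because the previous process was SIGKILLed the load takes the recovery path; that is what the bs_recover / "Recover: blob" notices just below are. The checks that follow (free_clusters == 61, total_data_clusters == 99) confirm that both the grow and the 150 MiB lvol survived the crash. A sketch, with $SPDK as before and $lvs now holding the dirty test's lvstore UUID (a28c8f6e-...):

    # same file, same bdev name: this loads (and here, recovers) the existing lvstore
    $SPDK/scripts/rpc.py bdev_aio_create "$SPDK/test/nvmf/target/aio_bdev" aio_bdev 4096
    $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # 61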
00:12:47.555 [2024-04-24 19:43:29.044172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.487 19:43:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:48.487 19:43:29 -- common/autotest_common.sh@850 -- # return 0 00:12:48.487 19:43:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:48.487 19:43:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:48.487 19:43:29 -- common/autotest_common.sh@10 -- # set +x 00:12:48.487 19:43:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.487 19:43:29 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:48.745 [2024-04-24 19:43:30.091285] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:48.745 [2024-04-24 19:43:30.091450] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:48.745 [2024-04-24 19:43:30.091498] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:48.745 19:43:30 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:12:48.745 19:43:30 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev f2912bb5-afe4-479c-98ca-53b9ec36c9ee 00:12:48.745 19:43:30 -- common/autotest_common.sh@885 -- # local bdev_name=f2912bb5-afe4-479c-98ca-53b9ec36c9ee 00:12:48.745 19:43:30 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:48.745 19:43:30 -- common/autotest_common.sh@887 -- # local i 00:12:48.745 19:43:30 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:48.745 19:43:30 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:48.745 19:43:30 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:49.003 19:43:30 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f2912bb5-afe4-479c-98ca-53b9ec36c9ee -t 2000 00:12:49.261 [ 00:12:49.261 { 00:12:49.261 "name": "f2912bb5-afe4-479c-98ca-53b9ec36c9ee", 00:12:49.261 "aliases": [ 00:12:49.261 "lvs/lvol" 00:12:49.261 ], 00:12:49.261 "product_name": "Logical Volume", 00:12:49.261 "block_size": 4096, 00:12:49.261 "num_blocks": 38912, 00:12:49.261 "uuid": "f2912bb5-afe4-479c-98ca-53b9ec36c9ee", 00:12:49.261 "assigned_rate_limits": { 00:12:49.261 "rw_ios_per_sec": 0, 00:12:49.261 "rw_mbytes_per_sec": 0, 00:12:49.261 "r_mbytes_per_sec": 0, 00:12:49.261 "w_mbytes_per_sec": 0 00:12:49.261 }, 00:12:49.261 "claimed": false, 00:12:49.261 "zoned": false, 00:12:49.261 "supported_io_types": { 00:12:49.261 "read": true, 00:12:49.261 "write": true, 00:12:49.261 "unmap": true, 00:12:49.261 "write_zeroes": true, 00:12:49.261 "flush": false, 00:12:49.261 "reset": true, 00:12:49.261 "compare": false, 00:12:49.261 "compare_and_write": false, 00:12:49.261 "abort": false, 00:12:49.261 "nvme_admin": false, 00:12:49.261 "nvme_io": false 00:12:49.261 }, 00:12:49.261 "driver_specific": { 00:12:49.261 "lvol": { 00:12:49.261 "lvol_store_uuid": "a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2", 00:12:49.261 "base_bdev": "aio_bdev", 00:12:49.261 "thin_provision": false, 00:12:49.261 "snapshot": false, 00:12:49.261 "clone": false, 00:12:49.261 "esnap_clone": false 00:12:49.261 } 00:12:49.261 } 00:12:49.261 } 00:12:49.261 ] 00:12:49.261 19:43:30 -- common/autotest_common.sh@893 -- # return 0 00:12:49.261 19:43:30 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 00:12:49.261 19:43:30 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:12:49.519 19:43:30 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:12:49.519 19:43:30 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 00:12:49.519 19:43:30 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:12:49.777 19:43:31 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:12:49.777 19:43:31 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:50.035 [2024-04-24 19:43:31.311913] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:50.035 19:43:31 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 00:12:50.035 19:43:31 -- common/autotest_common.sh@638 -- # local es=0 00:12:50.035 19:43:31 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 00:12:50.035 19:43:31 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.035 19:43:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:50.035 19:43:31 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.035 19:43:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:50.035 19:43:31 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.035 19:43:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:50.035 19:43:31 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.035 19:43:31 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:50.035 19:43:31 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 00:12:50.293 request: 00:12:50.293 { 00:12:50.293 "uuid": "a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2", 00:12:50.293 "method": "bdev_lvol_get_lvstores", 00:12:50.293 "req_id": 1 00:12:50.293 } 00:12:50.293 Got JSON-RPC error response 00:12:50.293 response: 00:12:50.293 { 00:12:50.293 "code": -19, 00:12:50.293 "message": "No such device" 00:12:50.293 } 00:12:50.293 19:43:31 -- common/autotest_common.sh@641 -- # es=1 00:12:50.293 19:43:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:50.293 19:43:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:50.293 19:43:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:50.293 19:43:31 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:50.551 aio_bdev 00:12:50.551 19:43:31 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev f2912bb5-afe4-479c-98ca-53b9ec36c9ee 00:12:50.551 19:43:31 -- 
common/autotest_common.sh@885 -- # local bdev_name=f2912bb5-afe4-479c-98ca-53b9ec36c9ee 00:12:50.551 19:43:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:50.551 19:43:31 -- common/autotest_common.sh@887 -- # local i 00:12:50.551 19:43:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:50.551 19:43:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:50.551 19:43:31 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:50.809 19:43:32 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f2912bb5-afe4-479c-98ca-53b9ec36c9ee -t 2000 00:12:50.809 [ 00:12:50.809 { 00:12:50.809 "name": "f2912bb5-afe4-479c-98ca-53b9ec36c9ee", 00:12:50.809 "aliases": [ 00:12:50.809 "lvs/lvol" 00:12:50.809 ], 00:12:50.809 "product_name": "Logical Volume", 00:12:50.809 "block_size": 4096, 00:12:50.809 "num_blocks": 38912, 00:12:50.809 "uuid": "f2912bb5-afe4-479c-98ca-53b9ec36c9ee", 00:12:50.809 "assigned_rate_limits": { 00:12:50.809 "rw_ios_per_sec": 0, 00:12:50.809 "rw_mbytes_per_sec": 0, 00:12:50.809 "r_mbytes_per_sec": 0, 00:12:50.809 "w_mbytes_per_sec": 0 00:12:50.809 }, 00:12:50.809 "claimed": false, 00:12:50.809 "zoned": false, 00:12:50.809 "supported_io_types": { 00:12:50.809 "read": true, 00:12:50.809 "write": true, 00:12:50.809 "unmap": true, 00:12:50.809 "write_zeroes": true, 00:12:50.809 "flush": false, 00:12:50.809 "reset": true, 00:12:50.810 "compare": false, 00:12:50.810 "compare_and_write": false, 00:12:50.810 "abort": false, 00:12:50.810 "nvme_admin": false, 00:12:50.810 "nvme_io": false 00:12:50.810 }, 00:12:50.810 "driver_specific": { 00:12:50.810 "lvol": { 00:12:50.810 "lvol_store_uuid": "a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2", 00:12:50.810 "base_bdev": "aio_bdev", 00:12:50.810 "thin_provision": false, 00:12:50.810 "snapshot": false, 00:12:50.810 "clone": false, 00:12:50.810 "esnap_clone": false 00:12:50.810 } 00:12:50.810 } 00:12:50.810 } 00:12:50.810 ] 00:12:50.810 19:43:32 -- common/autotest_common.sh@893 -- # return 0 00:12:50.810 19:43:32 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 00:12:50.810 19:43:32 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:12:51.067 19:43:32 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:12:51.068 19:43:32 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 00:12:51.068 19:43:32 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:12:51.325 19:43:32 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:12:51.325 19:43:32 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f2912bb5-afe4-479c-98ca-53b9ec36c9ee 00:12:51.891 19:43:33 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a28c8f6e-381c-4d88-bd6e-bc9fa0b8c8b2 00:12:52.149 19:43:33 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:52.149 19:43:33 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:52.407 00:12:52.407 real 0m19.376s 00:12:52.407 user 
0m48.318s 00:12:52.407 sys 0m4.616s 00:12:52.407 19:43:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:52.407 19:43:33 -- common/autotest_common.sh@10 -- # set +x 00:12:52.407 ************************************ 00:12:52.407 END TEST lvs_grow_dirty 00:12:52.407 ************************************ 00:12:52.407 19:43:33 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:52.407 19:43:33 -- common/autotest_common.sh@794 -- # type=--id 00:12:52.407 19:43:33 -- common/autotest_common.sh@795 -- # id=0 00:12:52.407 19:43:33 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:12:52.407 19:43:33 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:52.407 19:43:33 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:12:52.407 19:43:33 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:12:52.407 19:43:33 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:12:52.407 19:43:33 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:52.407 nvmf_trace.0 00:12:52.407 19:43:33 -- common/autotest_common.sh@809 -- # return 0 00:12:52.407 19:43:33 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:52.407 19:43:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:52.407 19:43:33 -- nvmf/common.sh@117 -- # sync 00:12:52.407 19:43:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.407 19:43:33 -- nvmf/common.sh@120 -- # set +e 00:12:52.407 19:43:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.407 19:43:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.407 rmmod nvme_tcp 00:12:52.407 rmmod nvme_fabrics 00:12:52.407 rmmod nvme_keyring 00:12:52.407 19:43:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.407 19:43:33 -- nvmf/common.sh@124 -- # set -e 00:12:52.407 19:43:33 -- nvmf/common.sh@125 -- # return 0 00:12:52.407 19:43:33 -- nvmf/common.sh@478 -- # '[' -n 1671249 ']' 00:12:52.407 19:43:33 -- nvmf/common.sh@479 -- # killprocess 1671249 00:12:52.407 19:43:33 -- common/autotest_common.sh@936 -- # '[' -z 1671249 ']' 00:12:52.407 19:43:33 -- common/autotest_common.sh@940 -- # kill -0 1671249 00:12:52.407 19:43:33 -- common/autotest_common.sh@941 -- # uname 00:12:52.407 19:43:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:52.407 19:43:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1671249 00:12:52.407 19:43:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:52.407 19:43:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:52.407 19:43:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1671249' 00:12:52.407 killing process with pid 1671249 00:12:52.407 19:43:33 -- common/autotest_common.sh@955 -- # kill 1671249 00:12:52.407 19:43:33 -- common/autotest_common.sh@960 -- # wait 1671249 00:12:52.665 19:43:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:52.665 19:43:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:52.665 19:43:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:52.665 19:43:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.665 19:43:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.665 19:43:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.665 19:43:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.665 19:43:34 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:55.198 19:43:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:55.198 00:12:55.198 real 0m42.651s 00:12:55.198 user 1m12.095s 00:12:55.198 sys 0m8.420s 00:12:55.198 19:43:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:55.198 19:43:36 -- common/autotest_common.sh@10 -- # set +x 00:12:55.198 ************************************ 00:12:55.198 END TEST nvmf_lvs_grow 00:12:55.198 ************************************ 00:12:55.198 19:43:36 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:55.198 19:43:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:55.198 19:43:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:55.198 19:43:36 -- common/autotest_common.sh@10 -- # set +x 00:12:55.198 ************************************ 00:12:55.198 START TEST nvmf_bdev_io_wait 00:12:55.198 ************************************ 00:12:55.198 19:43:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:55.198 * Looking for test storage... 00:12:55.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.198 19:43:36 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.198 19:43:36 -- nvmf/common.sh@7 -- # uname -s 00:12:55.198 19:43:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.198 19:43:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.198 19:43:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.198 19:43:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.198 19:43:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.198 19:43:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.198 19:43:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.198 19:43:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.198 19:43:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.198 19:43:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.198 19:43:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.198 19:43:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.198 19:43:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.198 19:43:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.198 19:43:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.198 19:43:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.198 19:43:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.198 19:43:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.198 19:43:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.198 19:43:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.198 19:43:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.198 19:43:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.198 19:43:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.198 19:43:36 -- paths/export.sh@5 -- # export PATH 00:12:55.198 19:43:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.198 19:43:36 -- nvmf/common.sh@47 -- # : 0 00:12:55.198 19:43:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.198 19:43:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.198 19:43:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.198 19:43:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.198 19:43:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.198 19:43:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.198 19:43:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.198 19:43:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.198 19:43:36 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:55.199 19:43:36 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:55.199 19:43:36 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:55.199 19:43:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:55.199 19:43:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.199 19:43:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:55.199 19:43:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:55.199 19:43:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:55.199 19:43:36 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.199 19:43:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.199 19:43:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.199 19:43:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:55.199 19:43:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:55.199 19:43:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:55.199 19:43:36 -- common/autotest_common.sh@10 -- # set +x 00:12:57.108 19:43:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:57.108 19:43:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.108 19:43:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.108 19:43:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.108 19:43:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.108 19:43:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.108 19:43:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.108 19:43:38 -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.108 19:43:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.108 19:43:38 -- nvmf/common.sh@296 -- # e810=() 00:12:57.108 19:43:38 -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.108 19:43:38 -- nvmf/common.sh@297 -- # x722=() 00:12:57.108 19:43:38 -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.108 19:43:38 -- nvmf/common.sh@298 -- # mlx=() 00:12:57.108 19:43:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.108 19:43:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.108 19:43:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.108 19:43:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.108 19:43:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.108 19:43:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.108 19:43:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.108 19:43:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.108 19:43:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.108 19:43:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.108 19:43:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.108 19:43:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.108 19:43:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:57.109 19:43:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:57.109 19:43:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.109 19:43:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.109 19:43:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:57.109 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:57.109 19:43:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
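(Editor's note: the trace above is nvmf/common.sh classifying the node's NICs purely by PCI "vendor:device" ID — 0x8086:0x159b is the Intel E810-XXV that the ice driver binds, which is why both 0000:0a:00.x ports land in the e810 array. A minimal bash sketch of the same lookup; pci_bus_cache would normally be filled from an lspci scan, so its contents here are illustrative — only the IDs and BDFs come from the trace.)

declare -A pci_bus_cache=(
    ["0x8086:0x159b"]="0000:0a:00.0 0000:0a:00.1"   # the two E810-XXV ports seen above
)
intel=0x8086
e810=()
e810+=(${pci_bus_cache["$intel:0x1592"]})   # E810-C: no match on this node, adds nothing
e810+=(${pci_bus_cache["$intel:0x159b"]})   # E810-XXV: word-splits into both ports
for pci in "${e810[@]}"; do
    echo "Found $pci (0x8086 - 0x159b)"
done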
00:12:57.109 19:43:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:57.109 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:57.109 19:43:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.109 19:43:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.109 19:43:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.109 19:43:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:57.109 19:43:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.109 19:43:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:57.109 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:57.109 19:43:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.109 19:43:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.109 19:43:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.109 19:43:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:57.109 19:43:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.109 19:43:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:57.109 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:57.109 19:43:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.109 19:43:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:57.109 19:43:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:57.109 19:43:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:57.109 19:43:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.109 19:43:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.109 19:43:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.109 19:43:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:57.109 19:43:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.109 19:43:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.109 19:43:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:57.109 19:43:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.109 19:43:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.109 19:43:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:57.109 19:43:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:57.109 19:43:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.109 19:43:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.109 19:43:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.109 19:43:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.109 19:43:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:57.109 19:43:38 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.109 19:43:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.109 19:43:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.109 19:43:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:12:57.109 00:12:57.109 --- 10.0.0.2 ping statistics --- 00:12:57.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.109 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:12:57.109 19:43:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:12:57.109 00:12:57.109 --- 10.0.0.1 ping statistics --- 00:12:57.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.109 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:12:57.109 19:43:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.109 19:43:38 -- nvmf/common.sh@411 -- # return 0 00:12:57.109 19:43:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:57.109 19:43:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.109 19:43:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:57.109 19:43:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.109 19:43:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:57.109 19:43:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:57.109 19:43:38 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:57.109 19:43:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:57.109 19:43:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:57.109 19:43:38 -- common/autotest_common.sh@10 -- # set +x 00:12:57.109 19:43:38 -- nvmf/common.sh@470 -- # nvmfpid=1673794 00:12:57.109 19:43:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:57.109 19:43:38 -- nvmf/common.sh@471 -- # waitforlisten 1673794 00:12:57.109 19:43:38 -- common/autotest_common.sh@817 -- # '[' -z 1673794 ']' 00:12:57.109 19:43:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.109 19:43:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:57.109 19:43:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.109 19:43:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:57.109 19:43:38 -- common/autotest_common.sh@10 -- # set +x 00:12:57.109 [2024-04-24 19:43:38.552568] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:12:57.109 [2024-04-24 19:43:38.552677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.109 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.109 [2024-04-24 19:43:38.618640] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.368 [2024-04-24 19:43:38.726555] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.368 [2024-04-24 19:43:38.726609] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.368 [2024-04-24 19:43:38.726635] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.368 [2024-04-24 19:43:38.726666] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.368 [2024-04-24 19:43:38.726685] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.368 [2024-04-24 19:43:38.726738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.368 [2024-04-24 19:43:38.726797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.368 [2024-04-24 19:43:38.726862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.368 [2024-04-24 19:43:38.726865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.368 19:43:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:57.368 19:43:38 -- common/autotest_common.sh@850 -- # return 0 00:12:57.368 19:43:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:57.368 19:43:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:57.368 19:43:38 -- common/autotest_common.sh@10 -- # set +x 00:12:57.368 19:43:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.368 19:43:38 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:57.368 19:43:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.368 19:43:38 -- common/autotest_common.sh@10 -- # set +x 00:12:57.368 19:43:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.368 19:43:38 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:57.368 19:43:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.368 19:43:38 -- common/autotest_common.sh@10 -- # set +x 00:12:57.368 19:43:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.368 19:43:38 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:57.368 19:43:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.368 19:43:38 -- common/autotest_common.sh@10 -- # set +x 00:12:57.368 [2024-04-24 19:43:38.857061] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.368 19:43:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.368 19:43:38 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:57.368 19:43:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.368 19:43:38 -- common/autotest_common.sh@10 -- # set +x 00:12:57.626 Malloc0 00:12:57.626 19:43:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:57.626 19:43:38 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.626 19:43:38 -- common/autotest_common.sh@10 -- # set +x 00:12:57.626 19:43:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:57.626 19:43:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.626 19:43:38 -- common/autotest_common.sh@10 -- # set +x 00:12:57.626 19:43:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.626 19:43:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.626 19:43:38 -- common/autotest_common.sh@10 -- # set +x 00:12:57.626 [2024-04-24 19:43:38.917242] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.626 19:43:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1673941 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@30 -- # READ_PID=1673942 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:57.626 19:43:38 -- nvmf/common.sh@521 -- # config=() 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1673945 00:12:57.626 19:43:38 -- nvmf/common.sh@521 -- # local subsystem config 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:57.626 19:43:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:57.626 19:43:38 -- nvmf/common.sh@521 -- # config=() 00:12:57.626 19:43:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:57.626 { 00:12:57.626 "params": { 00:12:57.626 "name": "Nvme$subsystem", 00:12:57.626 "trtype": "$TEST_TRANSPORT", 00:12:57.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:57.626 "adrfam": "ipv4", 00:12:57.626 "trsvcid": "$NVMF_PORT", 00:12:57.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:57.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:57.626 "hdgst": ${hdgst:-false}, 00:12:57.626 "ddgst": ${ddgst:-false} 00:12:57.626 }, 00:12:57.626 "method": "bdev_nvme_attach_controller" 00:12:57.626 } 00:12:57.626 EOF 00:12:57.626 )") 00:12:57.626 19:43:38 -- nvmf/common.sh@521 -- # local subsystem config 00:12:57.626 19:43:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1673947 00:12:57.626 19:43:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:57.626 { 00:12:57.626 "params": { 00:12:57.626 "name": "Nvme$subsystem", 00:12:57.626 "trtype": "$TEST_TRANSPORT", 00:12:57.626 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:57.626 "adrfam": "ipv4", 00:12:57.626 "trsvcid": "$NVMF_PORT", 00:12:57.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:57.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:57.626 "hdgst": ${hdgst:-false}, 00:12:57.626 "ddgst": ${ddgst:-false} 00:12:57.626 }, 00:12:57.626 "method": "bdev_nvme_attach_controller" 00:12:57.626 } 00:12:57.626 EOF 00:12:57.626 )") 00:12:57.626 
19:43:38 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@35 -- # sync 00:12:57.626 19:43:38 -- nvmf/common.sh@521 -- # config=() 00:12:57.626 19:43:38 -- nvmf/common.sh@521 -- # local subsystem config 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:57.626 19:43:38 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:57.626 19:43:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:57.626 19:43:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:57.626 { 00:12:57.626 "params": { 00:12:57.626 "name": "Nvme$subsystem", 00:12:57.626 "trtype": "$TEST_TRANSPORT", 00:12:57.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:57.627 "adrfam": "ipv4", 00:12:57.627 "trsvcid": "$NVMF_PORT", 00:12:57.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:57.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:57.627 "hdgst": ${hdgst:-false}, 00:12:57.627 "ddgst": ${ddgst:-false} 00:12:57.627 }, 00:12:57.627 "method": "bdev_nvme_attach_controller" 00:12:57.627 } 00:12:57.627 EOF 00:12:57.627 )") 00:12:57.627 19:43:38 -- nvmf/common.sh@543 -- # cat 00:12:57.627 19:43:38 -- nvmf/common.sh@521 -- # config=() 00:12:57.627 19:43:38 -- nvmf/common.sh@521 -- # local subsystem config 00:12:57.627 19:43:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:57.627 19:43:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:57.627 { 00:12:57.627 "params": { 00:12:57.627 "name": "Nvme$subsystem", 00:12:57.627 "trtype": "$TEST_TRANSPORT", 00:12:57.627 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:57.627 "adrfam": "ipv4", 00:12:57.627 "trsvcid": "$NVMF_PORT", 00:12:57.627 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:57.627 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:57.627 "hdgst": ${hdgst:-false}, 00:12:57.627 "ddgst": ${ddgst:-false} 00:12:57.627 }, 00:12:57.627 "method": "bdev_nvme_attach_controller" 00:12:57.627 } 00:12:57.627 EOF 00:12:57.627 )") 00:12:57.627 19:43:38 -- nvmf/common.sh@543 -- # cat 00:12:57.627 19:43:38 -- nvmf/common.sh@543 -- # cat 00:12:57.627 19:43:38 -- target/bdev_io_wait.sh@37 -- # wait 1673941 00:12:57.627 19:43:38 -- nvmf/common.sh@543 -- # cat 00:12:57.627 19:43:38 -- nvmf/common.sh@545 -- # jq . 00:12:57.627 19:43:38 -- nvmf/common.sh@545 -- # jq . 00:12:57.627 19:43:38 -- nvmf/common.sh@545 -- # jq . 00:12:57.627 19:43:38 -- nvmf/common.sh@545 -- # jq . 
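(Editor's note: each bdevperf instance above receives its JSON config from gen_nvmf_target_json: a per-controller fragment is captured with config+=("$(cat <<-EOF ... EOF)"), the fragments are joined with IFS=',', and the result is validated by the jq . calls just traced — the rendered JSON follows below. A reduced sketch of that assembly; the outer wrapper object is simplified for illustration, and only the params values come from the log.)

config=()
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false, "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
IFS=','
# Join the fragments into one bdev config array and pretty-print/validate with jq.
printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }' "${config[*]}" | jq .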
00:12:57.627 19:43:38 -- nvmf/common.sh@546 -- # IFS=, 00:12:57.627 19:43:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:57.627 "params": { 00:12:57.627 "name": "Nvme1", 00:12:57.627 "trtype": "tcp", 00:12:57.627 "traddr": "10.0.0.2", 00:12:57.627 "adrfam": "ipv4", 00:12:57.627 "trsvcid": "4420", 00:12:57.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:57.627 "hdgst": false, 00:12:57.627 "ddgst": false 00:12:57.627 }, 00:12:57.627 "method": "bdev_nvme_attach_controller" 00:12:57.627 }' 00:12:57.627 19:43:38 -- nvmf/common.sh@546 -- # IFS=, 00:12:57.627 19:43:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:57.627 "params": { 00:12:57.627 "name": "Nvme1", 00:12:57.627 "trtype": "tcp", 00:12:57.627 "traddr": "10.0.0.2", 00:12:57.627 "adrfam": "ipv4", 00:12:57.627 "trsvcid": "4420", 00:12:57.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:57.627 "hdgst": false, 00:12:57.627 "ddgst": false 00:12:57.627 }, 00:12:57.627 "method": "bdev_nvme_attach_controller" 00:12:57.627 }' 00:12:57.627 19:43:38 -- nvmf/common.sh@546 -- # IFS=, 00:12:57.627 19:43:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:57.627 "params": { 00:12:57.627 "name": "Nvme1", 00:12:57.627 "trtype": "tcp", 00:12:57.627 "traddr": "10.0.0.2", 00:12:57.627 "adrfam": "ipv4", 00:12:57.627 "trsvcid": "4420", 00:12:57.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:57.627 "hdgst": false, 00:12:57.627 "ddgst": false 00:12:57.627 }, 00:12:57.627 "method": "bdev_nvme_attach_controller" 00:12:57.627 }' 00:12:57.627 19:43:38 -- nvmf/common.sh@546 -- # IFS=, 00:12:57.627 19:43:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:57.627 "params": { 00:12:57.627 "name": "Nvme1", 00:12:57.627 "trtype": "tcp", 00:12:57.627 "traddr": "10.0.0.2", 00:12:57.627 "adrfam": "ipv4", 00:12:57.627 "trsvcid": "4420", 00:12:57.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:57.627 "hdgst": false, 00:12:57.627 "ddgst": false 00:12:57.627 }, 00:12:57.627 "method": "bdev_nvme_attach_controller" 00:12:57.627 }' 00:12:57.627 [2024-04-24 19:43:38.963833] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:12:57.627 [2024-04-24 19:43:38.963834] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:12:57.627 [2024-04-24 19:43:38.963844] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:12:57.627 [2024-04-24 19:43:38.963844] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:12:57.627 [2024-04-24 19:43:38.963924] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:57.627 [2024-04-24 19:43:38.963925] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:57.627 [2024-04-24 19:43:38.963925] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:57.627 [2024-04-24 19:43:38.963926] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:57.627 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.627 EAL: No free 2048 kB hugepages reported on node 1 [2024-04-24 19:43:39.137230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.885 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.885 [2024-04-24 19:43:39.234187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:57.885 [2024-04-24 19:43:39.235292] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.885 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.885 [2024-04-24 19:43:39.332777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:57.885 [2024-04-24 19:43:39.335766] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.143 [2024-04-24 19:43:39.407257] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.143 [2024-04-24 19:43:39.436128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:58.143 [2024-04-24 19:43:39.502164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:12:58.143 Running I/O for 1 seconds... 00:12:58.143 Running I/O for 1 seconds... 00:12:58.143 Running I/O for 1 seconds... 00:12:58.143 Running I/O for 1 seconds...
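(Editor's note: the burst of startup output above — de-interleaved here, since the four processes' EAL banners were merged mid-line in the raw log — comes from four bdevperf jobs running concurrently: write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80. Each was started with its own instance id (-i 1..4), which DPDK turns into a distinct --file-prefix (spdk1..spdk4) so the secondary processes keep separate hugepage/shm files. A hedged sketch of the launch pattern; the binary path and flags are from the log, the loop itself is illustrative.)

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
pids=()
for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
    set -- $spec                     # $1 = core mask, $2 = instance id, $3 = workload
    "$BDEVPERF" -m "$1" -i "$2" --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w "$3" -t 1 -s 256 &
    pids+=($!)
done
wait "${pids[@]}"                    # the script waits on $WRITE_PID etc. individually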
00:12:59.078 00:12:59.078 Latency(us) 00:12:59.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.078 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:59.078 Nvme1n1 : 1.01 9774.72 38.18 0.00 0.00 13033.42 8786.68 21456.97 00:12:59.078 =================================================================================================================== 00:12:59.078 Total : 9774.72 38.18 0.00 0.00 13033.42 8786.68 21456.97 00:12:59.336 00:12:59.336 Latency(us) 00:12:59.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.336 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:59.336 Nvme1n1 : 1.01 9160.09 35.78 0.00 0.00 13916.51 4757.43 21359.88 00:12:59.336 =================================================================================================================== 00:12:59.336 Total : 9160.09 35.78 0.00 0.00 13916.51 4757.43 21359.88 00:12:59.336 00:12:59.336 Latency(us) 00:12:59.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.336 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:59.336 Nvme1n1 : 1.00 190942.14 745.87 0.00 0.00 667.80 256.38 849.54 00:12:59.336 =================================================================================================================== 00:12:59.336 Total : 190942.14 745.87 0.00 0.00 667.80 256.38 849.54 00:12:59.336 00:12:59.336 Latency(us) 00:12:59.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.336 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:59.336 Nvme1n1 : 1.01 9234.90 36.07 0.00 0.00 13800.20 8107.05 25631.86 00:12:59.336 =================================================================================================================== 00:12:59.336 Total : 9234.90 36.07 0.00 0.00 13800.20 8107.05 25631.86 00:12:59.594 19:43:40 -- target/bdev_io_wait.sh@38 -- # wait 1673942 00:12:59.595 19:43:40 -- target/bdev_io_wait.sh@39 -- # wait 1673945 00:12:59.595 19:43:40 -- target/bdev_io_wait.sh@40 -- # wait 1673947 00:12:59.595 19:43:40 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.595 19:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:59.595 19:43:40 -- common/autotest_common.sh@10 -- # set +x 00:12:59.595 19:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:59.595 19:43:40 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:59.595 19:43:40 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:59.595 19:43:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:59.595 19:43:40 -- nvmf/common.sh@117 -- # sync 00:12:59.595 19:43:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:59.595 19:43:40 -- nvmf/common.sh@120 -- # set +e 00:12:59.595 19:43:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:59.595 19:43:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:59.595 rmmod nvme_tcp 00:12:59.595 rmmod nvme_fabrics 00:12:59.595 rmmod nvme_keyring 00:12:59.595 19:43:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:59.595 19:43:40 -- nvmf/common.sh@124 -- # set -e 00:12:59.595 19:43:40 -- nvmf/common.sh@125 -- # return 0 00:12:59.595 19:43:40 -- nvmf/common.sh@478 -- # '[' -n 1673794 ']' 00:12:59.595 19:43:40 -- nvmf/common.sh@479 -- # killprocess 1673794 00:12:59.595 19:43:40 -- common/autotest_common.sh@936 -- # '[' -z 1673794 ']' 00:12:59.595 19:43:40 -- 
common/autotest_common.sh@940 -- # kill -0 1673794 00:12:59.595 19:43:40 -- common/autotest_common.sh@941 -- # uname 00:12:59.595 19:43:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:59.595 19:43:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1673794 00:12:59.595 19:43:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:59.595 19:43:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:59.595 19:43:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1673794' 00:12:59.595 killing process with pid 1673794 00:12:59.595 19:43:41 -- common/autotest_common.sh@955 -- # kill 1673794 00:12:59.595 19:43:41 -- common/autotest_common.sh@960 -- # wait 1673794 00:12:59.853 19:43:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:59.853 19:43:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:59.853 19:43:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:59.853 19:43:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.853 19:43:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:59.853 19:43:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.853 19:43:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.853 19:43:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.388 19:43:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:02.388 00:13:02.388 real 0m7.035s 00:13:02.388 user 0m15.805s 00:13:02.388 sys 0m3.477s 00:13:02.388 19:43:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:02.388 19:43:43 -- common/autotest_common.sh@10 -- # set +x 00:13:02.388 ************************************ 00:13:02.388 END TEST nvmf_bdev_io_wait 00:13:02.388 ************************************ 00:13:02.388 19:43:43 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:02.388 19:43:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:02.388 19:43:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.388 19:43:43 -- common/autotest_common.sh@10 -- # set +x 00:13:02.388 ************************************ 00:13:02.388 START TEST nvmf_queue_depth 00:13:02.388 ************************************ 00:13:02.388 19:43:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:02.388 * Looking for test storage... 
00:13:02.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.388 19:43:43 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.388 19:43:43 -- nvmf/common.sh@7 -- # uname -s 00:13:02.388 19:43:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.388 19:43:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.388 19:43:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.388 19:43:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.388 19:43:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.388 19:43:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.388 19:43:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.388 19:43:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.388 19:43:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.388 19:43:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.388 19:43:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:02.388 19:43:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:02.388 19:43:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.388 19:43:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.388 19:43:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.388 19:43:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.388 19:43:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.388 19:43:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.388 19:43:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.388 19:43:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.388 19:43:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.388 19:43:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.388 19:43:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.388 19:43:43 -- paths/export.sh@5 -- # export PATH 00:13:02.388 19:43:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.388 19:43:43 -- nvmf/common.sh@47 -- # : 0 00:13:02.388 19:43:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.388 19:43:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.388 19:43:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.388 19:43:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.388 19:43:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.388 19:43:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.388 19:43:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.388 19:43:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.388 19:43:43 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:02.388 19:43:43 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:02.388 19:43:43 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:02.388 19:43:43 -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:02.388 19:43:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:02.388 19:43:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.388 19:43:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:02.388 19:43:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:02.388 19:43:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:02.388 19:43:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.388 19:43:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.388 19:43:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.388 19:43:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:02.389 19:43:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:02.389 19:43:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:02.389 19:43:43 -- common/autotest_common.sh@10 -- # set +x 00:13:04.293 19:43:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:04.293 19:43:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:04.293 19:43:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:04.293 19:43:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:04.293 19:43:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:04.293 19:43:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:04.293 19:43:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:04.293 19:43:45 -- nvmf/common.sh@295 -- # net_devs=() 
00:13:04.293 19:43:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:04.293 19:43:45 -- nvmf/common.sh@296 -- # e810=() 00:13:04.293 19:43:45 -- nvmf/common.sh@296 -- # local -ga e810 00:13:04.293 19:43:45 -- nvmf/common.sh@297 -- # x722=() 00:13:04.293 19:43:45 -- nvmf/common.sh@297 -- # local -ga x722 00:13:04.293 19:43:45 -- nvmf/common.sh@298 -- # mlx=() 00:13:04.293 19:43:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:04.293 19:43:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.293 19:43:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.293 19:43:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.293 19:43:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.293 19:43:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.293 19:43:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.293 19:43:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.293 19:43:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.294 19:43:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.294 19:43:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.294 19:43:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.294 19:43:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:04.294 19:43:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:04.294 19:43:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:04.294 19:43:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.294 19:43:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:04.294 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:04.294 19:43:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.294 19:43:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:04.294 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:04.294 19:43:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:04.294 19:43:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.294 19:43:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.294 19:43:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:04.294 19:43:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
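(Editor's note: the pci_net_devs handling in this discovery pass is a plain sysfs lookup — a port's kernel netdevs are listed under /sys/bus/pci/devices/<bdf>/net/, and the ${...##*/} expansion strips the directory prefix, leaving just the interface name that the echo below reports. A standalone sketch; only the BDF and interface name are taken from the trace.)

pci=0000:0a:00.0                                   # first E810 port on this node
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
echo "Found net devices under $pci: ${pci_net_devs[*]}"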
00:13:04.294 19:43:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:04.294 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:04.294 19:43:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.294 19:43:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.294 19:43:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.294 19:43:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:04.294 19:43:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.294 19:43:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:04.294 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:04.294 19:43:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.294 19:43:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:04.294 19:43:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:04.294 19:43:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:04.294 19:43:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.294 19:43:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.294 19:43:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.294 19:43:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:04.294 19:43:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.294 19:43:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.294 19:43:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:04.294 19:43:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.294 19:43:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.294 19:43:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:04.294 19:43:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:04.294 19:43:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.294 19:43:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.294 19:43:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.294 19:43:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.294 19:43:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:04.294 19:43:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.294 19:43:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.294 19:43:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.294 19:43:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:04.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:04.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:13:04.294 00:13:04.294 --- 10.0.0.2 ping statistics --- 00:13:04.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.294 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:13:04.294 19:43:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:04.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:13:04.294 00:13:04.294 --- 10.0.0.1 ping statistics --- 00:13:04.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.294 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:13:04.294 19:43:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.294 19:43:45 -- nvmf/common.sh@411 -- # return 0 00:13:04.294 19:43:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:04.294 19:43:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.294 19:43:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:04.294 19:43:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.294 19:43:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:04.294 19:43:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:04.294 19:43:45 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:04.294 19:43:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:04.294 19:43:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:04.294 19:43:45 -- common/autotest_common.sh@10 -- # set +x 00:13:04.294 19:43:45 -- nvmf/common.sh@470 -- # nvmfpid=1676163 00:13:04.294 19:43:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:04.294 19:43:45 -- nvmf/common.sh@471 -- # waitforlisten 1676163 00:13:04.294 19:43:45 -- common/autotest_common.sh@817 -- # '[' -z 1676163 ']' 00:13:04.294 19:43:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.294 19:43:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:04.294 19:43:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.294 19:43:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:04.294 19:43:45 -- common/autotest_common.sh@10 -- # set +x 00:13:04.294 [2024-04-24 19:43:45.555813] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:13:04.294 [2024-04-24 19:43:45.555879] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.294 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.294 [2024-04-24 19:43:45.623686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.294 [2024-04-24 19:43:45.738456] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.294 [2024-04-24 19:43:45.738519] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.294 [2024-04-24 19:43:45.738544] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.294 [2024-04-24 19:43:45.738558] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.294 [2024-04-24 19:43:45.738569] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
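(Editor's note: nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten blocks until the target's RPC socket answers before the test proceeds. Roughly the pattern sketched below — the polling loop is only an approximation of what the helper does; the paths and flags come from the log.)

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the target answers (the real helper
# also verifies on every iteration that $nvmfpid is still alive).
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
done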
00:13:04.294 [2024-04-24 19:43:45.738619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.553 19:43:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:04.554 19:43:45 -- common/autotest_common.sh@850 -- # return 0 00:13:04.554 19:43:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:04.554 19:43:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:04.554 19:43:45 -- common/autotest_common.sh@10 -- # set +x 00:13:04.554 19:43:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.554 19:43:45 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:04.554 19:43:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.554 19:43:45 -- common/autotest_common.sh@10 -- # set +x 00:13:04.554 [2024-04-24 19:43:45.880526] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.554 19:43:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.554 19:43:45 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:04.554 19:43:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.554 19:43:45 -- common/autotest_common.sh@10 -- # set +x 00:13:04.554 Malloc0 00:13:04.554 19:43:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.554 19:43:45 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:04.554 19:43:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.554 19:43:45 -- common/autotest_common.sh@10 -- # set +x 00:13:04.554 19:43:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.554 19:43:45 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:04.554 19:43:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.554 19:43:45 -- common/autotest_common.sh@10 -- # set +x 00:13:04.554 19:43:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.554 19:43:45 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.554 19:43:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.554 19:43:45 -- common/autotest_common.sh@10 -- # set +x 00:13:04.554 [2024-04-24 19:43:45.949409] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.554 19:43:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.554 19:43:45 -- target/queue_depth.sh@30 -- # bdevperf_pid=1676192 00:13:04.554 19:43:45 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:04.554 19:43:45 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:04.554 19:43:45 -- target/queue_depth.sh@33 -- # waitforlisten 1676192 /var/tmp/bdevperf.sock 00:13:04.554 19:43:45 -- common/autotest_common.sh@817 -- # '[' -z 1676192 ']' 00:13:04.554 19:43:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:04.554 19:43:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:04.554 19:43:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:04.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:04.554 19:43:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:04.554 19:43:45 -- common/autotest_common.sh@10 -- # set +x 00:13:04.554 [2024-04-24 19:43:45.994596] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:13:04.554 [2024-04-24 19:43:45.994705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1676192 ] 00:13:04.554 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.554 [2024-04-24 19:43:46.055423] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.812 [2024-04-24 19:43:46.170948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.747 19:43:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:05.747 19:43:46 -- common/autotest_common.sh@850 -- # return 0 00:13:05.747 19:43:46 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:05.747 19:43:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.747 19:43:46 -- common/autotest_common.sh@10 -- # set +x 00:13:05.747 NVMe0n1 00:13:05.747 19:43:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.747 19:43:47 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:05.747 Running I/O for 10 seconds... 00:13:18.026 00:13:18.026 Latency(us) 00:13:18.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.026 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:18.026 Verification LBA range: start 0x0 length 0x4000 00:13:18.026 NVMe0n1 : 10.07 8345.98 32.60 0.00 0.00 122172.07 14757.74 77672.30 00:13:18.026 =================================================================================================================== 00:13:18.026 Total : 8345.98 32.60 0.00 0.00 122172.07 14757.74 77672.30 00:13:18.026 0 00:13:18.026 19:43:57 -- target/queue_depth.sh@39 -- # killprocess 1676192 00:13:18.026 19:43:57 -- common/autotest_common.sh@936 -- # '[' -z 1676192 ']' 00:13:18.026 19:43:57 -- common/autotest_common.sh@940 -- # kill -0 1676192 00:13:18.026 19:43:57 -- common/autotest_common.sh@941 -- # uname 00:13:18.026 19:43:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:18.026 19:43:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1676192 00:13:18.026 19:43:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:18.026 19:43:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:18.026 19:43:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1676192' 00:13:18.026 killing process with pid 1676192 00:13:18.026 19:43:57 -- common/autotest_common.sh@955 -- # kill 1676192 00:13:18.026 Received shutdown signal, test time was about 10.000000 seconds 00:13:18.026 00:13:18.026 Latency(us) 00:13:18.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.026 =================================================================================================================== 00:13:18.026 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:18.026 19:43:57 -- 
common/autotest_common.sh@960 -- # wait 1676192 00:13:18.026 19:43:57 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:18.026 19:43:57 -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:18.027 19:43:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:18.027 19:43:57 -- nvmf/common.sh@117 -- # sync 00:13:18.027 19:43:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:18.027 19:43:57 -- nvmf/common.sh@120 -- # set +e 00:13:18.027 19:43:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.027 19:43:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:18.027 rmmod nvme_tcp 00:13:18.027 rmmod nvme_fabrics 00:13:18.027 rmmod nvme_keyring 00:13:18.027 19:43:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.027 19:43:57 -- nvmf/common.sh@124 -- # set -e 00:13:18.027 19:43:57 -- nvmf/common.sh@125 -- # return 0 00:13:18.027 19:43:57 -- nvmf/common.sh@478 -- # '[' -n 1676163 ']' 00:13:18.027 19:43:57 -- nvmf/common.sh@479 -- # killprocess 1676163 00:13:18.027 19:43:57 -- common/autotest_common.sh@936 -- # '[' -z 1676163 ']' 00:13:18.027 19:43:57 -- common/autotest_common.sh@940 -- # kill -0 1676163 00:13:18.027 19:43:57 -- common/autotest_common.sh@941 -- # uname 00:13:18.027 19:43:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:18.027 19:43:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1676163 00:13:18.027 19:43:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:18.027 19:43:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:18.027 19:43:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1676163' 00:13:18.027 killing process with pid 1676163 00:13:18.027 19:43:57 -- common/autotest_common.sh@955 -- # kill 1676163 00:13:18.027 19:43:57 -- common/autotest_common.sh@960 -- # wait 1676163 00:13:18.027 19:43:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:18.027 19:43:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:18.027 19:43:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:18.027 19:43:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.027 19:43:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:18.027 19:43:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.027 19:43:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.027 19:43:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.595 19:44:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:18.595 00:13:18.595 real 0m16.587s 00:13:18.595 user 0m24.148s 00:13:18.595 sys 0m2.921s 00:13:18.595 19:44:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:18.595 19:44:00 -- common/autotest_common.sh@10 -- # set +x 00:13:18.595 ************************************ 00:13:18.595 END TEST nvmf_queue_depth 00:13:18.595 ************************************ 00:13:18.595 19:44:00 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:18.595 19:44:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:18.595 19:44:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:18.595 19:44:00 -- common/autotest_common.sh@10 -- # set +x 00:13:18.853 ************************************ 00:13:18.853 START TEST nvmf_multipath 00:13:18.853 ************************************ 00:13:18.853 19:44:00 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:18.853 * Looking for test storage... 00:13:18.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.853 19:44:00 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.853 19:44:00 -- nvmf/common.sh@7 -- # uname -s 00:13:18.853 19:44:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.853 19:44:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.853 19:44:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.853 19:44:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.853 19:44:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.853 19:44:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.854 19:44:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.854 19:44:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.854 19:44:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.854 19:44:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.854 19:44:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.854 19:44:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.854 19:44:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.854 19:44:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.854 19:44:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.854 19:44:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.854 19:44:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.854 19:44:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.854 19:44:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.854 19:44:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.854 19:44:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.854 19:44:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.854 19:44:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.854 19:44:00 -- paths/export.sh@5 -- # export PATH 00:13:18.854 19:44:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.854 19:44:00 -- nvmf/common.sh@47 -- # : 0 00:13:18.854 19:44:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.854 19:44:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.854 19:44:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.854 19:44:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.854 19:44:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.854 19:44:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.854 19:44:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.854 19:44:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.854 19:44:00 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.854 19:44:00 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:18.854 19:44:00 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:18.854 19:44:00 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.854 19:44:00 -- target/multipath.sh@43 -- # nvmftestinit 00:13:18.854 19:44:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:18.854 19:44:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.854 19:44:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:18.854 19:44:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:18.854 19:44:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:18.854 19:44:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.854 19:44:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.854 19:44:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.854 19:44:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:18.854 19:44:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:18.854 19:44:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.854 19:44:00 -- common/autotest_common.sh@10 -- # set +x 00:13:20.755 19:44:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:20.755 19:44:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:20.755 19:44:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:20.755 19:44:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:20.755 19:44:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:20.755 19:44:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:20.755 19:44:02 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:13:20.755 19:44:02 -- nvmf/common.sh@295 -- # net_devs=() 00:13:20.755 19:44:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:20.755 19:44:02 -- nvmf/common.sh@296 -- # e810=() 00:13:20.755 19:44:02 -- nvmf/common.sh@296 -- # local -ga e810 00:13:20.755 19:44:02 -- nvmf/common.sh@297 -- # x722=() 00:13:20.755 19:44:02 -- nvmf/common.sh@297 -- # local -ga x722 00:13:20.755 19:44:02 -- nvmf/common.sh@298 -- # mlx=() 00:13:20.755 19:44:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:20.755 19:44:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.755 19:44:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.755 19:44:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.755 19:44:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.755 19:44:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.755 19:44:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.755 19:44:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.755 19:44:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.755 19:44:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.755 19:44:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.755 19:44:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.755 19:44:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:20.755 19:44:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:20.755 19:44:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:20.755 19:44:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.755 19:44:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:20.755 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:20.755 19:44:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.755 19:44:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:20.755 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:20.755 19:44:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:20.755 19:44:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.755 19:44:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.755 19:44:02 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:13:20.755 19:44:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.755 19:44:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:20.755 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:20.755 19:44:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.755 19:44:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.755 19:44:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.755 19:44:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:20.755 19:44:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.755 19:44:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:20.755 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:20.755 19:44:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.755 19:44:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:20.755 19:44:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:20.755 19:44:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:20.755 19:44:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:20.755 19:44:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.756 19:44:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.756 19:44:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.756 19:44:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:20.756 19:44:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.756 19:44:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.756 19:44:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:20.756 19:44:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.756 19:44:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.756 19:44:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:20.756 19:44:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:20.756 19:44:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.756 19:44:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.756 19:44:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.756 19:44:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.756 19:44:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:21.014 19:44:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.014 19:44:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.014 19:44:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.014 19:44:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:21.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:13:21.014 00:13:21.014 --- 10.0.0.2 ping statistics --- 00:13:21.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.014 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:13:21.014 19:44:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
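nvmf_tcp_init, traced above, builds the loopback topology these tests run on: one E810 port (cvl_0_0) is moved into a fresh network namespace as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening port 4420 and a ping in each direction as a sanity check. The same steps, lifted from the trace into a standalone sketch:

  # Flush any stale addresses, then split the two ports across namespaces.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP traffic in on the initiator-facing port, then sanity-ping.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1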
00:13:21.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:13:21.014 00:13:21.014 --- 10.0.0.1 ping statistics --- 00:13:21.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.014 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:13:21.014 19:44:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.014 19:44:02 -- nvmf/common.sh@411 -- # return 0 00:13:21.014 19:44:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:21.014 19:44:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.014 19:44:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:21.014 19:44:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:21.014 19:44:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.014 19:44:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:21.014 19:44:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:21.014 19:44:02 -- target/multipath.sh@45 -- # '[' -z ']' 00:13:21.014 19:44:02 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:21.014 only one NIC for nvmf test 00:13:21.014 19:44:02 -- target/multipath.sh@47 -- # nvmftestfini 00:13:21.014 19:44:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:21.014 19:44:02 -- nvmf/common.sh@117 -- # sync 00:13:21.014 19:44:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:21.014 19:44:02 -- nvmf/common.sh@120 -- # set +e 00:13:21.014 19:44:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:21.014 19:44:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:21.014 rmmod nvme_tcp 00:13:21.014 rmmod nvme_fabrics 00:13:21.014 rmmod nvme_keyring 00:13:21.014 19:44:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.014 19:44:02 -- nvmf/common.sh@124 -- # set -e 00:13:21.014 19:44:02 -- nvmf/common.sh@125 -- # return 0 00:13:21.014 19:44:02 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:13:21.014 19:44:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:21.014 19:44:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:21.014 19:44:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:21.014 19:44:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:21.014 19:44:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:21.014 19:44:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.014 19:44:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.014 19:44:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.549 19:44:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:23.549 19:44:04 -- target/multipath.sh@48 -- # exit 0 00:13:23.549 19:44:04 -- target/multipath.sh@1 -- # nvmftestfini 00:13:23.549 19:44:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:23.549 19:44:04 -- nvmf/common.sh@117 -- # sync 00:13:23.549 19:44:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:23.549 19:44:04 -- nvmf/common.sh@120 -- # set +e 00:13:23.549 19:44:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:23.549 19:44:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:23.549 19:44:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:23.549 19:44:04 -- nvmf/common.sh@124 -- # set -e 00:13:23.549 19:44:04 -- nvmf/common.sh@125 -- # return 0 00:13:23.549 19:44:04 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:13:23.549 19:44:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:23.549 19:44:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:23.549 19:44:04 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:13:23.549 19:44:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:23.549 19:44:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:23.549 19:44:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.549 19:44:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.549 19:44:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.549 19:44:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:23.549 00:13:23.549 real 0m4.318s 00:13:23.549 user 0m0.819s 00:13:23.549 sys 0m1.490s 00:13:23.549 19:44:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:23.549 19:44:04 -- common/autotest_common.sh@10 -- # set +x 00:13:23.549 ************************************ 00:13:23.549 END TEST nvmf_multipath 00:13:23.549 ************************************ 00:13:23.549 19:44:04 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:23.549 19:44:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:23.549 19:44:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:23.549 19:44:04 -- common/autotest_common.sh@10 -- # set +x 00:13:23.549 ************************************ 00:13:23.549 START TEST nvmf_zcopy 00:13:23.549 ************************************ 00:13:23.549 19:44:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:23.549 * Looking for test storage... 00:13:23.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.549 19:44:04 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.549 19:44:04 -- nvmf/common.sh@7 -- # uname -s 00:13:23.549 19:44:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.549 19:44:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.549 19:44:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.549 19:44:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.549 19:44:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.549 19:44:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.549 19:44:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.549 19:44:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.549 19:44:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.549 19:44:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.549 19:44:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.549 19:44:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.549 19:44:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.549 19:44:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.549 19:44:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.549 19:44:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.549 19:44:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.549 19:44:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.549 19:44:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.549 19:44:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.550 
19:44:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.550 19:44:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.550 19:44:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.550 19:44:04 -- paths/export.sh@5 -- # export PATH 00:13:23.550 19:44:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.550 19:44:04 -- nvmf/common.sh@47 -- # : 0 00:13:23.550 19:44:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.550 19:44:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.550 19:44:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.550 19:44:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.550 19:44:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.550 19:44:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:23.550 19:44:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.550 19:44:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.550 19:44:04 -- target/zcopy.sh@12 -- # nvmftestinit 00:13:23.550 19:44:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:23.550 19:44:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.550 19:44:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:23.550 19:44:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:23.550 19:44:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:23.550 19:44:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.550 19:44:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
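The NVME_HOSTNQN / NVME_HOSTID pair set while sourcing common.sh above comes straight from nvme-cli. A two-line sketch; deriving the host ID by stripping the NQN down to its UUID suffix is an assumption inferred from the matching values in the trace, not a documented contract:

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: host ID reuses the UUID portion of the NQN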
00:13:23.550 19:44:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.550 19:44:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:23.550 19:44:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:23.550 19:44:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:23.550 19:44:04 -- common/autotest_common.sh@10 -- # set +x 00:13:25.451 19:44:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:25.451 19:44:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:25.451 19:44:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:25.451 19:44:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:25.451 19:44:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:25.451 19:44:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:25.451 19:44:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:25.451 19:44:06 -- nvmf/common.sh@295 -- # net_devs=() 00:13:25.451 19:44:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:25.451 19:44:06 -- nvmf/common.sh@296 -- # e810=() 00:13:25.451 19:44:06 -- nvmf/common.sh@296 -- # local -ga e810 00:13:25.451 19:44:06 -- nvmf/common.sh@297 -- # x722=() 00:13:25.451 19:44:06 -- nvmf/common.sh@297 -- # local -ga x722 00:13:25.451 19:44:06 -- nvmf/common.sh@298 -- # mlx=() 00:13:25.451 19:44:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:25.451 19:44:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.451 19:44:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.451 19:44:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.451 19:44:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.451 19:44:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.451 19:44:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.451 19:44:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.451 19:44:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.451 19:44:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.451 19:44:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.451 19:44:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.451 19:44:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:25.451 19:44:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:25.451 19:44:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:25.451 19:44:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:25.451 19:44:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:25.451 19:44:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:25.451 19:44:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.451 19:44:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:25.451 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:25.451 19:44:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.451 19:44:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.451 19:44:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.452 19:44:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.452 19:44:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.452 19:44:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.452 19:44:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:25.452 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:13:25.452 19:44:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.452 19:44:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.452 19:44:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.452 19:44:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.452 19:44:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.452 19:44:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:25.452 19:44:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:25.452 19:44:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:25.452 19:44:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.452 19:44:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.452 19:44:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:25.452 19:44:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.452 19:44:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:25.452 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:25.452 19:44:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.452 19:44:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.452 19:44:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.452 19:44:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:25.452 19:44:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.452 19:44:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:25.452 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:25.452 19:44:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.452 19:44:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:25.452 19:44:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:25.452 19:44:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:25.452 19:44:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:25.452 19:44:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:25.452 19:44:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.452 19:44:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.452 19:44:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.452 19:44:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:25.452 19:44:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.452 19:44:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.452 19:44:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:25.452 19:44:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.452 19:44:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.452 19:44:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:25.452 19:44:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:25.452 19:44:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.452 19:44:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.452 19:44:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.452 19:44:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.452 19:44:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:25.452 19:44:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.452 19:44:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.452 
19:44:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.452 19:44:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:25.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:13:25.452 00:13:25.452 --- 10.0.0.2 ping statistics --- 00:13:25.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.452 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:13:25.452 19:44:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:13:25.452 00:13:25.452 --- 10.0.0.1 ping statistics --- 00:13:25.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.452 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:25.452 19:44:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.452 19:44:06 -- nvmf/common.sh@411 -- # return 0 00:13:25.452 19:44:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:25.452 19:44:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.452 19:44:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:25.452 19:44:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:25.452 19:44:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.452 19:44:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:25.452 19:44:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:25.452 19:44:06 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:25.452 19:44:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:25.452 19:44:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:25.452 19:44:06 -- common/autotest_common.sh@10 -- # set +x 00:13:25.452 19:44:06 -- nvmf/common.sh@470 -- # nvmfpid=1681390 00:13:25.452 19:44:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:25.452 19:44:06 -- nvmf/common.sh@471 -- # waitforlisten 1681390 00:13:25.452 19:44:06 -- common/autotest_common.sh@817 -- # '[' -z 1681390 ']' 00:13:25.452 19:44:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.452 19:44:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:25.452 19:44:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.452 19:44:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:25.452 19:44:06 -- common/autotest_common.sh@10 -- # set +x 00:13:25.452 [2024-04-24 19:44:06.788490] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:13:25.452 [2024-04-24 19:44:06.788576] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.452 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.452 [2024-04-24 19:44:06.857149] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.710 [2024-04-24 19:44:06.980160] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:25.710 [2024-04-24 19:44:06.980229] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.710 [2024-04-24 19:44:06.980245] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.710 [2024-04-24 19:44:06.980259] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.710 [2024-04-24 19:44:06.980271] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:25.710 [2024-04-24 19:44:06.980320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.710 19:44:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:25.710 19:44:07 -- common/autotest_common.sh@850 -- # return 0 00:13:25.710 19:44:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:25.710 19:44:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:25.710 19:44:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.710 19:44:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.710 19:44:07 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:25.710 19:44:07 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:25.710 19:44:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:25.710 19:44:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.711 [2024-04-24 19:44:07.127473] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.711 19:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:25.711 19:44:07 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:25.711 19:44:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:25.711 19:44:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.711 19:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:25.711 19:44:07 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.711 19:44:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:25.711 19:44:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.711 [2024-04-24 19:44:07.143700] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.711 19:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:25.711 19:44:07 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:25.711 19:44:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:25.711 19:44:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.711 19:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:25.711 19:44:07 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:25.711 19:44:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:25.711 19:44:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.711 malloc0 00:13:25.711 19:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:25.711 19:44:07 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:25.711 19:44:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:25.711 19:44:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.711 19:44:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:25.711 19:44:07 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:25.711 19:44:07 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:25.711 19:44:07 -- nvmf/common.sh@521 -- # config=() 00:13:25.711 19:44:07 -- nvmf/common.sh@521 -- # local subsystem config 00:13:25.711 19:44:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:25.711 19:44:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:25.711 { 00:13:25.711 "params": { 00:13:25.711 "name": "Nvme$subsystem", 00:13:25.711 "trtype": "$TEST_TRANSPORT", 00:13:25.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:25.711 "adrfam": "ipv4", 00:13:25.711 "trsvcid": "$NVMF_PORT", 00:13:25.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:25.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:25.711 "hdgst": ${hdgst:-false}, 00:13:25.711 "ddgst": ${ddgst:-false} 00:13:25.711 }, 00:13:25.711 "method": "bdev_nvme_attach_controller" 00:13:25.711 } 00:13:25.711 EOF 00:13:25.711 )") 00:13:25.711 19:44:07 -- nvmf/common.sh@543 -- # cat 00:13:25.711 19:44:07 -- nvmf/common.sh@545 -- # jq . 00:13:25.711 19:44:07 -- nvmf/common.sh@546 -- # IFS=, 00:13:25.711 19:44:07 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:25.711 "params": { 00:13:25.711 "name": "Nvme1", 00:13:25.711 "trtype": "tcp", 00:13:25.711 "traddr": "10.0.0.2", 00:13:25.711 "adrfam": "ipv4", 00:13:25.711 "trsvcid": "4420", 00:13:25.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:25.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:25.711 "hdgst": false, 00:13:25.711 "ddgst": false 00:13:25.711 }, 00:13:25.711 "method": "bdev_nvme_attach_controller" 00:13:25.711 }' 00:13:25.969 [2024-04-24 19:44:07.225518] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:13:25.969 [2024-04-24 19:44:07.225603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1681537 ] 00:13:25.969 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.969 [2024-04-24 19:44:07.294873] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.969 [2024-04-24 19:44:07.418092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.228 Running I/O for 10 seconds... 
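Compared with the queue_depth setup, the zcopy provisioning above changes two RPC details: the TCP transport is created with in-capsule data disabled and zero copy enabled (-c 0 --zcopy), and the subsystem caps namespaces at 10 (-m 10) over a 32 MiB malloc bdev with 4 KiB blocks. A condensed sketch, flags copied from the trace:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0     # 32 MiB, 4 KiB blocks
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1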
00:13:38.434 00:13:38.434 Latency(us) 00:13:38.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.434 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:38.434 Verification LBA range: start 0x0 length 0x1000 00:13:38.434 Nvme1n1 : 10.02 5834.97 45.59 0.00 0.00 21877.39 3495.25 33981.63 00:13:38.434 =================================================================================================================== 00:13:38.434 Total : 5834.97 45.59 0.00 0.00 21877.39 3495.25 33981.63 00:13:38.434 19:44:18 -- target/zcopy.sh@39 -- # perfpid=1682726 00:13:38.434 19:44:18 -- target/zcopy.sh@41 -- # xtrace_disable 00:13:38.434 19:44:18 -- common/autotest_common.sh@10 -- # set +x 00:13:38.434 19:44:18 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:38.435 19:44:18 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:38.435 19:44:18 -- nvmf/common.sh@521 -- # config=() 00:13:38.435 19:44:18 -- nvmf/common.sh@521 -- # local subsystem config 00:13:38.435 19:44:18 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:38.435 19:44:18 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:38.435 { 00:13:38.435 "params": { 00:13:38.435 "name": "Nvme$subsystem", 00:13:38.435 "trtype": "$TEST_TRANSPORT", 00:13:38.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:38.435 "adrfam": "ipv4", 00:13:38.435 "trsvcid": "$NVMF_PORT", 00:13:38.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:38.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:38.435 "hdgst": ${hdgst:-false}, 00:13:38.435 "ddgst": ${ddgst:-false} 00:13:38.435 }, 00:13:38.435 "method": "bdev_nvme_attach_controller" 00:13:38.435 } 00:13:38.435 EOF 00:13:38.435 )") 00:13:38.435 19:44:18 -- nvmf/common.sh@543 -- # cat 00:13:38.435 [2024-04-24 19:44:18.050203] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.050253] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 19:44:18 -- nvmf/common.sh@545 -- # jq . 
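gen_nvmf_target_json, expanded twice in the trace, hands bdevperf its bdev configuration on an anonymous descriptor (--json /dev/fd/62, then /dev/fd/63) rather than a file on disk. Only the inner attach-controller object appears verbatim in the log; wrapping it in the usual subsystems/config framing is an assumption. A sketch of the second run using process substitution to the same effect:

  # Assumed outer framing; the "params"/"method" object is verbatim from the trace.
  cfg='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"name":"Nvme1",
       "trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420",
       "subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1",
       "hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}]}]}'
  # 5 s of 50/50 random read/write at queue depth 128, 8 KiB I/O.
  ./build/examples/bdevperf --json <(echo "$cfg") -t 5 -q 128 -w randrw -M 50 -o 8192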
00:13:38.435 19:44:18 -- nvmf/common.sh@546 -- # IFS=, 00:13:38.435 19:44:18 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:38.435 "params": { 00:13:38.435 "name": "Nvme1", 00:13:38.435 "trtype": "tcp", 00:13:38.435 "traddr": "10.0.0.2", 00:13:38.435 "adrfam": "ipv4", 00:13:38.435 "trsvcid": "4420", 00:13:38.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.435 "hdgst": false, 00:13:38.435 "ddgst": false 00:13:38.435 }, 00:13:38.435 "method": "bdev_nvme_attach_controller" 00:13:38.435 }' 00:13:38.435 [2024-04-24 19:44:18.058153] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.058181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.066174] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.066202] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.074200] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.074227] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.082221] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.082246] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.089720] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:13:38.435 [2024-04-24 19:44:18.089791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1682726 ] 00:13:38.435 [2024-04-24 19:44:18.090243] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.090269] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.098265] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.098291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.106285] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.106311] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.114308] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.114332] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.435 [2024-04-24 19:44:18.122329] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.122354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.130351] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.130376] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.138371] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 
1 already in use 00:13:38.435 [2024-04-24 19:44:18.138396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.146392] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.146425] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.154414] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.154440] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.155987] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.435 [2024-04-24 19:44:18.162464] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.162494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.170497] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.170540] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.178482] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.178507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.186507] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.186533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.194527] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.194552] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.202549] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.202575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.210567] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.210592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.218591] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.218617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.226646] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.226698] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.234645] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.234687] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.242665] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 19:44:18.242703] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:38.435 [2024-04-24 19:44:18.250705] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:38.435 [2024-04-24 
00:13:38.435 [2024-04-24 19:44:18.276778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:38.435 [... error pair repeats through 19:44:18.467 ...]
00:13:38.436 Running I/O for 5 seconds...
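"Running I/O for 5 seconds..." is bdevperf opening its measurement window; the EAL parameters recorded above show it was pinned to a single core (-c 0x1). A minimal stand-alone invocation consistent with that is sketched below; the binary path, config path, queue depth, I/O size, and workload are illustrative assumptions, not values recoverable from this log.

  # Run bdevperf for 5 seconds on core 0 against bdevs defined in a JSON config
  # (the config would carry the bdev_nvme_attach_controller call shown earlier).
  ./build/examples/bdevperf --json /tmp/bdevperf.json \
      -m 0x1 -t 5 -q 128 -o 4096 -w randread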
00:13:38.436 [2024-04-24 19:44:18.475329] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:38.436 [2024-04-24 19:44:18.475355] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:38.436 [... the same error pair continues with advancing timestamps through 19:44:19.749 ...]
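The error pair that fills this window comes from the target side: the subsystem is paused, a namespace add is attempted under an NSID the subsystem already holds, and the RPC fails. The test appears to drive this path deliberately, over and over, while I/O runs. A minimal two-step reproduction against a live target is sketched below; the bdev names Malloc0 and Malloc1 are illustrative.

  # First add succeeds and claims NSID 1 on the subsystem.
  ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
  # Second add with the same NSID reproduces the pair seen in this log:
  #   "Requested NSID 1 already in use" / "Unable to add namespace"
  ./scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc1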
00:13:38.438 [... error pair continues with advancing timestamps through 19:44:21.195, the console prefix ticking from 00:13:38.438 to 00:13:39.737; the excerpt breaks off mid-entry at [2024-04-24 19:44:21.195608] ...]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.737 [2024-04-24 19:44:21.206298] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.737 [2024-04-24 19:44:21.206326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.737 [2024-04-24 19:44:21.216609] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.737 [2024-04-24 19:44:21.216642] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.737 [2024-04-24 19:44:21.227246] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.737 [2024-04-24 19:44:21.227274] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.737 [2024-04-24 19:44:21.237616] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.737 [2024-04-24 19:44:21.237651] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.737 [2024-04-24 19:44:21.248282] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.737 [2024-04-24 19:44:21.248311] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.259111] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.259140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.270195] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.270223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.283232] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.283262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.293069] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.293099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.305229] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.305260] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.316425] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.316454] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.327223] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.327252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.337957] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.337986] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.349175] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.349211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.360644] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.360673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.371708] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.371735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.382541] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.382569] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.394088] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.394116] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.405511] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.405540] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.416374] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.416402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.427105] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.427133] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.438053] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.438081] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.448973] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.449000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.459976] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.460004] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.470747] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.470774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.481460] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.481487] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.492525] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.492552] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:39.996 [2024-04-24 19:44:21.503585] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:39.996 [2024-04-24 19:44:21.503613] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.514523] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.514551] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.525220] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.525248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.538224] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.538252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.548390] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.548419] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.559723] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.559758] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.570373] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.570401] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.581203] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.581231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.592354] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.592381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.603286] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.603313] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.613348] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.613376] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.624808] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.624835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.635788] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.635816] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.646706] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.646735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.657688] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.657715] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.668720] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.668749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.679623] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.679657] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.690422] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.690450] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.701486] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.701514] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.711971] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.712000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.722891] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.722919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.733830] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.255 [2024-04-24 19:44:21.733858] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.255 [2024-04-24 19:44:21.744893] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.256 [2024-04-24 19:44:21.744919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.256 [2024-04-24 19:44:21.755316] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.256 [2024-04-24 19:44:21.755343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.256 [2024-04-24 19:44:21.766218] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.256 [2024-04-24 19:44:21.766252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.513 [2024-04-24 19:44:21.777047] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.513 [2024-04-24 19:44:21.777076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.513 [2024-04-24 19:44:21.787493] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.513 [2024-04-24 19:44:21.787521] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.513 [2024-04-24 19:44:21.798044] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.513 [2024-04-24 19:44:21.798071] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.513 [2024-04-24 19:44:21.808859] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.513 [2024-04-24 19:44:21.808887] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.513 [2024-04-24 19:44:21.819672] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.513 [2024-04-24 19:44:21.819726] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.513 [2024-04-24 19:44:21.830982] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.513 [2024-04-24 19:44:21.831009] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.513 [2024-04-24 19:44:21.843426] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.513 [2024-04-24 19:44:21.843454] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.513 [2024-04-24 19:44:21.853017] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.513 [2024-04-24 19:44:21.853046] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.513 [2024-04-24 19:44:21.864498] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.513 [2024-04-24 19:44:21.864527] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.513 [2024-04-24 19:44:21.875720] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.513 [2024-04-24 19:44:21.875748] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.513 [2024-04-24 19:44:21.887152] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.513 [2024-04-24 19:44:21.887180] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.513 [2024-04-24 19:44:21.897883] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.513 [2024-04-24 19:44:21.897911] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.513 [2024-04-24 19:44:21.908661] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.513 [2024-04-24 19:44:21.908688] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.514 [2024-04-24 19:44:21.919738] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.514 [2024-04-24 19:44:21.919765] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.514 [2024-04-24 19:44:21.931215] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.514 [2024-04-24 19:44:21.931244] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.514 [2024-04-24 19:44:21.942411] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.514 [2024-04-24 19:44:21.942438] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.514 [2024-04-24 19:44:21.953246] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.514 [2024-04-24 19:44:21.953274] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.514 [2024-04-24 19:44:21.964069] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.514 [2024-04-24 19:44:21.964096] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.514 [2024-04-24 19:44:21.974569] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.514 [2024-04-24 19:44:21.974597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.514 [2024-04-24 19:44:21.985246] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.514 [2024-04-24 19:44:21.985273] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.514 [2024-04-24 19:44:21.995923] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.514 [2024-04-24 19:44:21.995950] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.514 [2024-04-24 19:44:22.006520] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.514 [2024-04-24 19:44:22.006548] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.514 [2024-04-24 19:44:22.017080] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.514 [2024-04-24 19:44:22.017108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.028666] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.028694] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.040975] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.041002] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.050115] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.050142] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.063056] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.063084] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.072707] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.072735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.083662] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.083689] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.094218] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.094245] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.104774] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.104802] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.115251] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.115278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.126042] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.126069] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.139209] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.139237] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.149159] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.149188] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.160259] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.160287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.171494] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.171523] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.182560] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.182588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.195024] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.195051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.204527] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.204554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.216126] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.216154] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.228345] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.228374] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.237864] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.237892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.250043] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.250072] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.260127] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.260155] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.270946] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.270975] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:40.772 [2024-04-24 19:44:22.284098] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:40.772 [2024-04-24 19:44:22.284127] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.294124] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.294153] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.305447] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.305486] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.316167] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.316195] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.326878] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.326905] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.338139] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.338168] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.348696] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.348723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.361353] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.361381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.370957] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.370985] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.382018] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.382046] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.392911] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.392954] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.403987] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.404017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.414968] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.414998] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.425827] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.425855] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.436732] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.436760] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.447730] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.447759] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.458784] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.458812] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.469578] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.469607] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.480722] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.480751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.490358] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.490387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.502071] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.502100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.514333] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.514361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.524023] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.524053] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.033 [2024-04-24 19:44:22.535772] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.033 [2024-04-24 19:44:22.535799] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.294 [2024-04-24 19:44:22.546923] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.294 [2024-04-24 19:44:22.546952] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.294 [2024-04-24 19:44:22.560566] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.294 [2024-04-24 19:44:22.560594] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.294 [2024-04-24 19:44:22.571351] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.294 [2024-04-24 19:44:22.571383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.294 [2024-04-24 19:44:22.582514] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.294 [2024-04-24 19:44:22.582542] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.294 [2024-04-24 19:44:22.593479] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.294 [2024-04-24 19:44:22.593515] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.294 [2024-04-24 19:44:22.604527] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.294 [2024-04-24 19:44:22.604556] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.294 [2024-04-24 19:44:22.615332] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.294 [2024-04-24 19:44:22.615362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.294 [2024-04-24 19:44:22.626218] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.294 [2024-04-24 19:44:22.626248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.294 [2024-04-24 19:44:22.637406] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.294 [2024-04-24 19:44:22.637435] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.294 [2024-04-24 19:44:22.648790] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.294 [2024-04-24 19:44:22.648819] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.294 [2024-04-24 19:44:22.659944] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.294 [2024-04-24 19:44:22.659972] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.294 [2024-04-24 19:44:22.670256] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.294 [2024-04-24 19:44:22.670285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.294 [2024-04-24 19:44:22.681232] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.294 [2024-04-24 19:44:22.681262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.294 [2024-04-24 19:44:22.691809] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.295 [2024-04-24 19:44:22.691836] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.295 [2024-04-24 19:44:22.702286] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.295 [2024-04-24 19:44:22.702314] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.295 [2024-04-24 19:44:22.713007] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.295 [2024-04-24 19:44:22.713035] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.295 [2024-04-24 19:44:22.723769] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.295 [2024-04-24 19:44:22.723798] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.295 [2024-04-24 19:44:22.736466] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.295 [2024-04-24 19:44:22.736493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.295 [2024-04-24 19:44:22.746077] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.295 [2024-04-24 19:44:22.746104] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.295 [2024-04-24 19:44:22.757149] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.295 [2024-04-24 19:44:22.757176] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.295 [2024-04-24 19:44:22.768034] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.295 [2024-04-24 19:44:22.768062] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.295 [2024-04-24 19:44:22.780757] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.295 [2024-04-24 19:44:22.780786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.295 [2024-04-24 19:44:22.790421] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.295 [2024-04-24 19:44:22.790451] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.295 [2024-04-24 19:44:22.802659] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.295 [2024-04-24 19:44:22.802709] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.813476] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.813506] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.824555] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.824582] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.835306] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.835334] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.846062] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.846089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.857240] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.857268] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.868139] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.868167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.880808] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.880837] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.898690] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.898721] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.909355] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.909383] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.919541] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.919569] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.931066] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.931095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.941335] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.941362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.952981] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.953009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.963897] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.963925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.974404] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.974432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.985386] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.985414] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:22.996446] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:22.996474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:23.008120] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:23.008149] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:23.019320] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:23.019357] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:23.032601] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:23.032637] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:23.042494] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:23.042523] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:23.053717] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:23.053744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.554 [2024-04-24 19:44:23.064587] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.554 [2024-04-24 19:44:23.064617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.076008] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.076037] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.087176] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.087205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.097855] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.097883] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.108652] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.108679] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.118889] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.118916] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.129482] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.129524] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.140014] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.140043] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.150399] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.150426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.160896] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.160923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.171341] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.171369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.181951] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.181980] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.192423] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.192451] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.203101] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.203128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.214048] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.214076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.224708] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.224742] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.235986] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.236014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.245831] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.245858] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.257068] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.257095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.268084] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.268112] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.278919] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.278946] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.289831] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.289860] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.300795] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.300822] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.311723] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.311751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:41.813 [2024-04-24 19:44:23.322439] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:41.813 [2024-04-24 19:44:23.322467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.072 [2024-04-24 19:44:23.333359] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.072 [2024-04-24 19:44:23.333387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.072 [2024-04-24 19:44:23.344179] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.072 [2024-04-24 19:44:23.344206] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.072 [2024-04-24 19:44:23.355184] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.072 [2024-04-24 19:44:23.355226] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.073 [2024-04-24 19:44:23.366101] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.073 [2024-04-24 19:44:23.366128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.073 [2024-04-24 19:44:23.377065] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.073 [2024-04-24 19:44:23.377093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.073 [2024-04-24 19:44:23.387898] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.073 [2024-04-24 19:44:23.387925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.073 [2024-04-24 19:44:23.398649] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.073 [2024-04-24 19:44:23.398677] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.073 [2024-04-24 19:44:23.409515] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.073 [2024-04-24 19:44:23.409542] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.073 [2024-04-24 19:44:23.422309] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.073 [2024-04-24 19:44:23.422337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.073 [2024-04-24 19:44:23.432167] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.073 [2024-04-24 19:44:23.432202] 
00:13:42.073 [2024-04-24 19:44:23.443847] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:42.073 [2024-04-24 19:44:23.443874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair repeats at 19:44:23.454, 19:44:23.464, 19:44:23.475 and 19:44:23.487 ...]
00:13:42.073
00:13:42.073 Latency(us)
00:13:42.073 Device Information                                                                           : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:13:42.073 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:42.073 Nvme1n1                                                                                      :       5.01   11671.43      91.18      0.00     0.00   10952.59    4636.07   26214.40
00:13:42.073 ===================================================================================================================
00:13:42.073 Total                                                                                        :              11671.43      91.18      0.00     0.00   10952.59    4636.07   26214.40
00:13:42.073 [2024-04-24 19:44:23.492591] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:42.073 [2024-04-24 19:44:23.492620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair resumes at roughly 8 ms intervals, 19:44:23.500617 through 19:44:23.540817 ...]
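[Editor's note: a quick consistency check on the summary table above, not part of the captured run. With 8192-byte IOs, the MiB/s column should equal IOPS x IO size / 2^20, and the average latency at queue depth 128 should follow Little's law; both hold:]
  echo "scale=2; 11671.43 * 8192 / 1048576" | bc   # -> 91.18, matching the MiB/s column
  echo "scale=0; 11671.43 * 5.01 / 1" | bc         # -> ~58473 IOs completed over the 5.01 s runtime
  echo "scale=6; 128 / 11671.43 * 1000000" | bc    # -> ~10967 us, close to the 10952.59 us Average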
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.073 [2024-04-24 19:44:23.548839] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.073 [2024-04-24 19:44:23.556811] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.073 [2024-04-24 19:44:23.556860] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.073 [2024-04-24 19:44:23.564830] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.073 [2024-04-24 19:44:23.564876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.073 [2024-04-24 19:44:23.572857] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.073 [2024-04-24 19:44:23.572903] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.073 [2024-04-24 19:44:23.580880] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.073 [2024-04-24 19:44:23.580928] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.588933] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.588981] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.596928] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.596975] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.604947] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.604994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.612966] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.613014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.620958] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.620993] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.628967] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.629006] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.637005] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.637030] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.645026] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.645053] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.653053] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.653084] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.661107] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.661155] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.669130] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.669179] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.677108] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.677135] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.685127] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.685153] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.693158] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.693185] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.701171] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.701197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.709211] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.709246] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.717260] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.717310] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.725282] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.725332] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.733258] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.733283] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.741279] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.741303] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 [2024-04-24 19:44:23.749301] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:42.332 [2024-04-24 19:44:23.749326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:42.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1682726) - No such process 00:13:42.332 19:44:23 -- target/zcopy.sh@49 -- # wait 1682726 00:13:42.332 19:44:23 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.332 19:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.332 19:44:23 -- common/autotest_common.sh@10 -- # set +x 00:13:42.332 19:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.332 19:44:23 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:42.332 19:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.332 19:44:23 -- common/autotest_common.sh@10 -- # set +x 
00:13:42.332 delay0 00:13:42.332 19:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.332 19:44:23 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:42.332 19:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:42.332 19:44:23 -- common/autotest_common.sh@10 -- # set +x 00:13:42.332 19:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:42.332 19:44:23 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:42.332 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.590 [2024-04-24 19:44:23.878337] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:49.154 Initializing NVMe Controllers 00:13:49.154 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:49.154 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:49.154 Initialization complete. Launching workers. 00:13:49.154 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 108 00:13:49.154 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 398, failed to submit 30 00:13:49.154 success 201, unsuccess 197, failed 0 00:13:49.154 19:44:30 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:49.154 19:44:30 -- target/zcopy.sh@60 -- # nvmftestfini 00:13:49.154 19:44:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:49.154 19:44:30 -- nvmf/common.sh@117 -- # sync 00:13:49.154 19:44:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:49.154 19:44:30 -- nvmf/common.sh@120 -- # set +e 00:13:49.154 19:44:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.154 19:44:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:49.154 rmmod nvme_tcp 00:13:49.154 rmmod nvme_fabrics 00:13:49.154 rmmod nvme_keyring 00:13:49.154 19:44:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.154 19:44:30 -- nvmf/common.sh@124 -- # set -e 00:13:49.154 19:44:30 -- nvmf/common.sh@125 -- # return 0 00:13:49.154 19:44:30 -- nvmf/common.sh@478 -- # '[' -n 1681390 ']' 00:13:49.154 19:44:30 -- nvmf/common.sh@479 -- # killprocess 1681390 00:13:49.154 19:44:30 -- common/autotest_common.sh@936 -- # '[' -z 1681390 ']' 00:13:49.154 19:44:30 -- common/autotest_common.sh@940 -- # kill -0 1681390 00:13:49.154 19:44:30 -- common/autotest_common.sh@941 -- # uname 00:13:49.154 19:44:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:49.154 19:44:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1681390 00:13:49.154 19:44:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:49.154 19:44:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:49.154 19:44:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1681390' 00:13:49.154 killing process with pid 1681390 00:13:49.154 19:44:30 -- common/autotest_common.sh@955 -- # kill 1681390 00:13:49.154 19:44:30 -- common/autotest_common.sh@960 -- # wait 1681390 00:13:49.154 19:44:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:49.154 19:44:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:49.154 19:44:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:49.154 19:44:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
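The abort phase that just ran can be reproduced outside the harness. A minimal sketch using the same RPCs and the abort example binary from this workspace (the rpc.py socket default is assumed; all commands and arguments are taken from the log above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # swap NSID 1 from the plain malloc bdev to a delay bdev injecting ~1s latency,
    # so queued I/O lingers long enough to be aborted
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # calling nvmf_subsystem_add_ns again with -n 1 while NSID 1 exists is what
    # produced the "Requested NSID 1 already in use" *ERROR* loop earlier
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'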
00:13:49.154 19:44:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:49.154 19:44:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.154 19:44:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.154 19:44:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.055 19:44:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:51.055 00:13:51.055 real 0m27.960s 00:13:51.055 user 0m41.226s 00:13:51.055 sys 0m8.445s 00:13:51.055 19:44:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:51.055 19:44:32 -- common/autotest_common.sh@10 -- # set +x 00:13:51.055 ************************************ 00:13:51.055 END TEST nvmf_zcopy 00:13:51.055 ************************************ 00:13:51.314 19:44:32 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:51.314 19:44:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:51.314 19:44:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:51.314 19:44:32 -- common/autotest_common.sh@10 -- # set +x 00:13:51.314 ************************************ 00:13:51.314 START TEST nvmf_nmic 00:13:51.314 ************************************ 00:13:51.314 19:44:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:51.314 * Looking for test storage... 00:13:51.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.314 19:44:32 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.314 19:44:32 -- nvmf/common.sh@7 -- # uname -s 00:13:51.314 19:44:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.314 19:44:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.314 19:44:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.314 19:44:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.314 19:44:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.314 19:44:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.314 19:44:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.314 19:44:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.314 19:44:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.314 19:44:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.314 19:44:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:51.314 19:44:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:51.314 19:44:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.314 19:44:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.314 19:44:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.314 19:44:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.314 19:44:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.314 19:44:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.314 19:44:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.314 19:44:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.314 19:44:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go toolchain triple repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:51.314 19:44:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same entries with the go directory prepended ...]:/var/lib/snapd/snap/bin
00:13:51.314 19:44:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same entries with the protoc directory prepended ...]:/var/lib/snapd/snap/bin
00:13:51.314 19:44:32 -- paths/export.sh@5 -- # export PATH
00:13:51.314 19:44:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the assembled PATH echoed back ...]:/var/lib/snapd/snap/bin
00:13:51.314 19:44:32 -- nvmf/common.sh@47 -- # : 0
00:13:51.314 19:44:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:13:51.314 19:44:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:13:51.314 19:44:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:51.314 19:44:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:51.314 19:44:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:51.314 19:44:32 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:13:51.314 19:44:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:13:51.314 19:44:32 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:13:51.315 19:44:32 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:51.315 19:44:32 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:51.315 19:44:32 -- target/nmic.sh@14 -- # nvmftestinit
00:13:51.315 19:44:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:13:51.315 19:44:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:51.315 19:44:32 -- nvmf/common.sh@437 -- # prepare_net_devs
00:13:51.315 19:44:32 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:13:51.315 19:44:32 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:13:51.315 19:44:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:51.315 19:44:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.315 19:44:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.315 19:44:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:51.315 19:44:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:51.315 19:44:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:51.315 19:44:32 -- common/autotest_common.sh@10 -- # set +x 00:13:53.848 19:44:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:53.848 19:44:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:53.848 19:44:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:53.848 19:44:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:53.848 19:44:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:53.848 19:44:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:53.848 19:44:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:53.848 19:44:34 -- nvmf/common.sh@295 -- # net_devs=() 00:13:53.848 19:44:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:53.848 19:44:34 -- nvmf/common.sh@296 -- # e810=() 00:13:53.848 19:44:34 -- nvmf/common.sh@296 -- # local -ga e810 00:13:53.848 19:44:34 -- nvmf/common.sh@297 -- # x722=() 00:13:53.848 19:44:34 -- nvmf/common.sh@297 -- # local -ga x722 00:13:53.848 19:44:34 -- nvmf/common.sh@298 -- # mlx=() 00:13:53.848 19:44:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:53.848 19:44:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.848 19:44:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.848 19:44:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.848 19:44:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.848 19:44:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.848 19:44:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.848 19:44:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.848 19:44:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.848 19:44:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.848 19:44:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.848 19:44:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.848 19:44:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:53.848 19:44:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:53.848 19:44:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:53.848 19:44:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.848 19:44:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:53.848 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:53.848 19:44:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.848 19:44:34 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:53.848 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:53.848 19:44:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:53.848 19:44:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.848 19:44:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.848 19:44:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:53.848 19:44:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.848 19:44:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:53.848 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:53.848 19:44:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.848 19:44:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.848 19:44:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.848 19:44:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:53.848 19:44:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.848 19:44:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:53.848 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:53.848 19:44:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.848 19:44:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:53.848 19:44:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:53.848 19:44:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:53.848 19:44:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.848 19:44:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.848 19:44:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.848 19:44:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:53.848 19:44:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.848 19:44:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.848 19:44:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:53.848 19:44:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.848 19:44:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.848 19:44:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:53.848 19:44:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:53.848 19:44:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.848 19:44:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.848 19:44:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.848 19:44:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.848 19:44:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:53.848 19:44:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
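The namespace wiring just performed puts the target-side e810 port into its own network namespace so one physical host can act as both target and initiator. A condensed sketch of the same steps (interface names and addresses are the ones this host reports; loopback bring-up and the ping reachability checks follow in the log):

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up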
00:13:53.848 19:44:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.848 19:44:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.848 19:44:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:53.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:13:53.848 00:13:53.848 --- 10.0.0.2 ping statistics --- 00:13:53.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.848 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:13:53.848 19:44:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:53.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:13:53.848 00:13:53.848 --- 10.0.0.1 ping statistics --- 00:13:53.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.848 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:13:53.848 19:44:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.848 19:44:34 -- nvmf/common.sh@411 -- # return 0 00:13:53.848 19:44:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:53.848 19:44:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.848 19:44:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:53.848 19:44:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.848 19:44:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:53.848 19:44:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:53.848 19:44:35 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:53.848 19:44:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:53.848 19:44:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:53.848 19:44:35 -- common/autotest_common.sh@10 -- # set +x 00:13:53.848 19:44:35 -- nvmf/common.sh@470 -- # nvmfpid=1686113 00:13:53.848 19:44:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:53.848 19:44:35 -- nvmf/common.sh@471 -- # waitforlisten 1686113 00:13:53.848 19:44:35 -- common/autotest_common.sh@817 -- # '[' -z 1686113 ']' 00:13:53.848 19:44:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.848 19:44:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:53.848 19:44:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.848 19:44:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:53.848 19:44:35 -- common/autotest_common.sh@10 -- # set +x 00:13:53.849 [2024-04-24 19:44:35.052793] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
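The "Starting SPDK" banner above is printed by nvmf_tgt, launched inside the target namespace a few lines earlier by nvmfappstart. A minimal standalone sketch of that launch; the readiness poll is an assumption standing in for the harness's waitforlisten helper:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the app answers
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done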
00:13:53.849 [2024-04-24 19:44:35.052871] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.849 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.849 [2024-04-24 19:44:35.118229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.849 [2024-04-24 19:44:35.232596] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.849 [2024-04-24 19:44:35.232670] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.849 [2024-04-24 19:44:35.232685] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.849 [2024-04-24 19:44:35.232697] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.849 [2024-04-24 19:44:35.232708] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.849 [2024-04-24 19:44:35.232773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.849 [2024-04-24 19:44:35.232835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.849 [2024-04-24 19:44:35.232884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.849 [2024-04-24 19:44:35.232886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.108 19:44:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:54.108 19:44:35 -- common/autotest_common.sh@850 -- # return 0 00:13:54.108 19:44:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:54.108 19:44:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:54.108 19:44:35 -- common/autotest_common.sh@10 -- # set +x 00:13:54.108 19:44:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.108 19:44:35 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:54.108 19:44:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.108 19:44:35 -- common/autotest_common.sh@10 -- # set +x 00:13:54.108 [2024-04-24 19:44:35.391480] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.108 19:44:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.108 19:44:35 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:54.108 19:44:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.108 19:44:35 -- common/autotest_common.sh@10 -- # set +x 00:13:54.108 Malloc0 00:13:54.108 19:44:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.108 19:44:35 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:54.108 19:44:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.108 19:44:35 -- common/autotest_common.sh@10 -- # set +x 00:13:54.108 19:44:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.108 19:44:35 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:54.108 19:44:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.108 19:44:35 -- common/autotest_common.sh@10 -- # set +x 00:13:54.108 19:44:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.108 19:44:35 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 
-s 4420 00:13:54.108 19:44:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.108 19:44:35 -- common/autotest_common.sh@10 -- # set +x 00:13:54.108 [2024-04-24 19:44:35.445010] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.108 19:44:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.108 19:44:35 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:54.108 test case1: single bdev can't be used in multiple subsystems 00:13:54.108 19:44:35 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:54.108 19:44:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.108 19:44:35 -- common/autotest_common.sh@10 -- # set +x 00:13:54.108 19:44:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.108 19:44:35 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:54.108 19:44:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.108 19:44:35 -- common/autotest_common.sh@10 -- # set +x 00:13:54.108 19:44:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.108 19:44:35 -- target/nmic.sh@28 -- # nmic_status=0 00:13:54.108 19:44:35 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:54.108 19:44:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.108 19:44:35 -- common/autotest_common.sh@10 -- # set +x 00:13:54.108 [2024-04-24 19:44:35.468865] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:54.108 [2024-04-24 19:44:35.468896] subsystem.c:1934:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:54.108 [2024-04-24 19:44:35.468911] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:54.108 request: 00:13:54.108 { 00:13:54.108 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:54.108 "namespace": { 00:13:54.108 "bdev_name": "Malloc0", 00:13:54.108 "no_auto_visible": false 00:13:54.108 }, 00:13:54.108 "method": "nvmf_subsystem_add_ns", 00:13:54.108 "req_id": 1 00:13:54.108 } 00:13:54.108 Got JSON-RPC error response 00:13:54.108 response: 00:13:54.108 { 00:13:54.108 "code": -32602, 00:13:54.108 "message": "Invalid parameters" 00:13:54.108 } 00:13:54.108 19:44:35 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:54.108 19:44:35 -- target/nmic.sh@29 -- # nmic_status=1 00:13:54.108 19:44:35 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:54.108 19:44:35 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:54.108 Adding namespace failed - expected result. 
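What test case1 above exercises, reduced to the two RPCs involved: adding a namespace claims the bdev with an exclusive-write descriptor, so a second subsystem cannot add the same bdev. A sketch using this workspace's rpc.py; the expected failure is the JSON-RPC response captured above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # succeeds, claims Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed
    # expected JSON-RPC failure: {"code": -32602, "message": "Invalid parameters"}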
00:13:54.108 19:44:35 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
test case2: host connect to nvmf target in multiple paths
00:13:54.108 19:44:35 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:13:54.108 19:44:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:54.108 19:44:35 -- common/autotest_common.sh@10 -- # set +x
00:13:54.108 [2024-04-24 19:44:35.476990] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:13:54.108 19:44:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:54.108 19:44:35 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:54.674 19:44:36 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:13:55.239 19:44:36 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:13:55.239 19:44:36 -- common/autotest_common.sh@1184 -- # local i=0
00:13:55.239 19:44:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0
00:13:55.239 19:44:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]]
00:13:55.239 19:44:36 -- common/autotest_common.sh@1191 -- # sleep 2
00:13:57.764 19:44:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:13:57.764 19:44:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:13:57.764 19:44:38 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME
00:13:57.764 19:44:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1
00:13:57.764 19:44:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter ))
00:13:57.764 19:44:38 -- common/autotest_common.sh@1194 -- # return 0
00:13:57.764 19:44:38 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:13:57.764 [global]
00:13:57.764 thread=1
00:13:57.764 invalidate=1
00:13:57.764 rw=write
00:13:57.764 time_based=1
00:13:57.764 runtime=1
00:13:57.764 ioengine=libaio
00:13:57.764 direct=1
00:13:57.764 bs=4096
00:13:57.764 iodepth=1
00:13:57.764 norandommap=0
00:13:57.764 numjobs=1
00:13:57.764
00:13:57.764 verify_dump=1
00:13:57.764 verify_backlog=512
00:13:57.764 verify_state_save=0
00:13:57.764 do_verify=1
00:13:57.764 verify=crc32c-intel
00:13:57.764 [job0]
00:13:57.764 filename=/dev/nvme0n1
00:13:57.764 Could not set queue depth (nvme0n1)
00:13:57.764 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:57.764 fio-3.35
00:13:57.764 Starting 1 thread
00:13:58.698
00:13:58.698 job0: (groupid=0, jobs=1): err= 0: pid=1686640: Wed Apr 24 19:44:40 2024
00:13:58.698   read: IOPS=20, BW=81.1KiB/s (83.0kB/s)(84.0KiB/1036msec)
00:13:58.698     slat (nsec): min=11542, max=35985, avg=18822.67, stdev=7080.30
00:13:58.698     clat (usec): min=40896, max=42030, avg=41329.68, stdev=474.66
00:13:58.698      lat (usec): min=40921, max=42045, avg=41348.51, stdev=475.51
00:13:58.698     clat percentiles (usec):
00:13:58.698      |  1.00th=[41157],  5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:13:58.698      | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:13:58.698      | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206],
00:13:58.698      | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:13:58.698      | 99.99th=[42206]
00:13:58.698   write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets
00:13:58.698     slat (nsec): min=7052, max=66443, avg=18783.29, stdev=8550.85
00:13:58.698     clat (usec): min=210, max=448, avg=302.22, stdev=39.82
00:13:58.698      lat (usec): min=217, max=494, avg=321.01, stdev=44.03
00:13:58.698     clat percentiles (usec):
00:13:58.698      |  1.00th=[  225],  5.00th=[  241], 10.00th=[  251], 20.00th=[  269],
00:13:58.698      | 30.00th=[  277], 40.00th=[  289], 50.00th=[  302], 60.00th=[  310],
00:13:58.698      | 70.00th=[  322], 80.00th=[  343], 90.00th=[  347], 95.00th=[  371],
00:13:58.698      | 99.00th=[  404], 99.50th=[  424], 99.90th=[  449], 99.95th=[  449],
00:13:58.698      | 99.99th=[  449]
00:13:58.698    bw (  KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:13:58.698    iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:58.698   lat (usec)   : 250=9.38%, 500=86.68%
00:13:58.698   lat (msec)   : 50=3.94%
00:13:58.698   cpu          : usr=0.39%, sys=1.55%, ctx=533, majf=0, minf=2
00:13:58.698   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:58.698      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:58.698      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:58.698      issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:58.698      latency   : target=0, window=0, percentile=100.00%, depth=1
00:13:58.698
00:13:58.698 Run status group 0 (all jobs):
00:13:58.698    READ: bw=81.1KiB/s (83.0kB/s), 81.1KiB/s-81.1KiB/s (83.0kB/s-83.0kB/s), io=84.0KiB (86.0kB), run=1036-1036msec
00:13:58.698   WRITE: bw=1977KiB/s (2024kB/s), 1977KiB/s-1977KiB/s (2024kB/s-2024kB/s), io=2048KiB (2097kB), run=1036-1036msec
00:13:58.698
00:13:58.698 Disk stats (read/write):
00:13:58.698   nvme0n1: ios=67/512, merge=0/0, ticks=957/159, in_queue=1116, util=96.49%
00:13:58.698 19:44:40 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:58.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:13:58.956 19:44:40 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:58.956 19:44:40 -- common/autotest_common.sh@1205 -- # local i=0
00:13:58.956 19:44:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL
00:13:58.956 19:44:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:58.956 19:44:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL
00:13:58.956 19:44:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:58.956 19:44:40 -- common/autotest_common.sh@1217 -- # return 0
00:13:58.956 19:44:40 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:13:58.956 19:44:40 -- target/nmic.sh@53 -- # nvmftestfini
00:13:58.956 19:44:40 -- nvmf/common.sh@477 -- # nvmfcleanup
00:13:58.956 19:44:40 -- nvmf/common.sh@117 -- # sync
00:13:58.956 19:44:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:58.956 19:44:40 -- nvmf/common.sh@120 -- # set +e
00:13:58.956 19:44:40 -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:58.956 19:44:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:58.956 rmmod nvme_tcp
00:13:58.956 rmmod nvme_fabrics
00:13:58.956 rmmod nvme_keyring
00:13:58.956 19:44:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:58.956 19:44:40 -- nvmf/common.sh@124 -- # set -e
00:13:58.956 19:44:40 -- nvmf/common.sh@125 -- # return 0
00:13:58.956 19:44:40 -- nvmf/common.sh@478 -- # '[' -n 1686113 ']'
00:13:58.956 19:44:40 -- nvmf/common.sh@479 -- # killprocess 1686113
00:13:58.956 19:44:40 -- common/autotest_common.sh@936 -- # '[' -z 1686113 ']'
00:13:58.956 19:44:40 -- common/autotest_common.sh@940 -- # kill -0 1686113
00:13:58.956 19:44:40 -- common/autotest_common.sh@941 -- # uname
00:13:58.956 19:44:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:58.956 19:44:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1686113
00:13:58.956 19:44:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:58.956 19:44:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:58.956 19:44:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1686113'
00:13:58.956 killing process with pid 1686113
00:13:58.956 19:44:40 -- common/autotest_common.sh@955 -- # kill 1686113
00:13:58.956 19:44:40 -- common/autotest_common.sh@960 -- # wait 1686113
00:13:59.525 19:44:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:13:59.525 19:44:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:13:59.525 19:44:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:13:59.525 19:44:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:59.525 19:44:40 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:59.525 19:44:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:59.525 19:44:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:59.525 19:44:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:01.432 19:44:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:01.432
00:14:01.432 real	0m10.133s
00:14:01.432 user	0m22.909s
00:14:01.432 sys	0m2.389s
00:14:01.432 19:44:42 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:14:01.432 19:44:42 -- common/autotest_common.sh@10 -- # set +x
00:14:01.432 ************************************
00:14:01.432 END TEST nvmf_nmic
00:14:01.432 ************************************
00:14:01.432 19:44:42 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:14:01.432 19:44:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:14:01.432 19:44:42 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:01.432 19:44:42 -- common/autotest_common.sh@10 -- # set +x
00:14:01.432 ************************************
00:14:01.432 START TEST nvmf_fio_target
00:14:01.432 ************************************
00:14:01.432 19:44:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:14:01.691  * Looking for test storage...
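The verify job the nmic test ran (job file captured above) can be replayed without the fio-wrapper. A hedged fio one-liner equivalent of that job file, assuming the device enumerates as /dev/nvme0n1 the way it did in this run:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
        --invalidate=1 --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0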
00:14:01.691  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:01.691 19:44:42 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:01.691 19:44:42 -- nvmf/common.sh@7 -- # uname -s
00:14:01.691 19:44:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:01.691 19:44:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:01.691 19:44:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:01.691 19:44:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:01.691 19:44:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:01.691 19:44:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:01.691 19:44:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:01.691 19:44:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:01.691 19:44:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:01.691 19:44:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:01.691 19:44:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:01.691 19:44:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:14:01.691 19:44:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:01.691 19:44:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:01.691 19:44:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:01.691 19:44:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:01.691 19:44:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:01.691 19:44:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:01.691 19:44:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:01.691 19:44:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:01.691 19:44:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go toolchain triple repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:01.691 19:44:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same entries with the go directory prepended ...]:/var/lib/snapd/snap/bin
00:14:01.691 19:44:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same entries with the protoc directory prepended ...]:/var/lib/snapd/snap/bin
00:14:01.691 19:44:42 -- paths/export.sh@5 -- # export PATH
00:14:01.691 19:44:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the assembled PATH echoed back ...]:/var/lib/snapd/snap/bin
00:14:01.691 19:44:42 -- nvmf/common.sh@47 -- # : 0
00:14:01.691 19:44:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:14:01.691 19:44:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:14:01.691 19:44:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:01.691 19:44:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:01.691 19:44:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:01.691 19:44:42 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:14:01.691 19:44:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:14:01.691 19:44:42 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:14:01.691 19:44:42 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:14:01.691 19:44:42 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:14:01.691 19:44:42 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:01.691 19:44:42 -- target/fio.sh@16 -- # nvmftestinit
00:14:01.691 19:44:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:14:01.691 19:44:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:01.691 19:44:42 -- nvmf/common.sh@437 -- # prepare_net_devs
00:14:01.691 19:44:42 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:14:01.691 19:44:42 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:14:01.691 19:44:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:01.691 19:44:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:01.691 19:44:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:01.691 19:44:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:14:01.691 19:44:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:14:01.691 19:44:42 -- nvmf/common.sh@285 -- # xtrace_disable
00:14:01.691 19:44:42 -- common/autotest_common.sh@10 -- # set +x
00:14:03.593 19:44:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:14:03.593 19:44:44 -- nvmf/common.sh@291 -- # pci_devs=()
00:14:03.593 19:44:44 -- nvmf/common.sh@291 -- # local -a pci_devs
00:14:03.593 19:44:44 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:14:03.593 19:44:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:14:03.593 19:44:44 -- nvmf/common.sh@293 -- # pci_drivers=()
00:14:03.593 19:44:44 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:14:03.593 19:44:44 -- nvmf/common.sh@295 -- # net_devs=()
00:14:03.593 19:44:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:03.593 19:44:44 -- nvmf/common.sh@296 -- # e810=() 00:14:03.593 19:44:44 -- nvmf/common.sh@296 -- # local -ga e810 00:14:03.593 19:44:44 -- nvmf/common.sh@297 -- # x722=() 00:14:03.593 19:44:44 -- nvmf/common.sh@297 -- # local -ga x722 00:14:03.593 19:44:44 -- nvmf/common.sh@298 -- # mlx=() 00:14:03.593 19:44:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:03.593 19:44:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.593 19:44:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.593 19:44:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.593 19:44:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.593 19:44:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.593 19:44:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.593 19:44:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.593 19:44:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.593 19:44:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.593 19:44:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.593 19:44:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.593 19:44:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:03.593 19:44:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:03.594 19:44:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:03.594 19:44:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.594 19:44:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:03.594 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:03.594 19:44:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.594 19:44:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:03.594 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:03.594 19:44:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:03.594 19:44:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:03.594 19:44:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.594 19:44:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.594 19:44:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:03.594 19:44:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:14:03.594 19:44:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:14:03.594 Found net devices under 0000:0a:00.0: cvl_0_0
00:14:03.594 19:44:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:14:03.594 19:44:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:03.594 19:44:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:03.594 19:44:44 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:14:03.594 19:44:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:03.594 19:44:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:14:03.594 Found net devices under 0000:0a:00.1: cvl_0_1
00:14:03.594 19:44:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:14:03.594 19:44:44 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:14:03.594 19:44:44 -- nvmf/common.sh@403 -- # is_hw=yes
00:14:03.594 19:44:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:14:03.594 19:44:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:14:03.594 19:44:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:14:03.594 19:44:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:03.594 19:44:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:03.594 19:44:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:03.594 19:44:44 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:14:03.594 19:44:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:03.594 19:44:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:03.594 19:44:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:14:03.594 19:44:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:03.594 19:44:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:03.594 19:44:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:14:03.594 19:44:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:14:03.594 19:44:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:14:03.594 19:44:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:03.594 19:44:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:03.594 19:44:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:03.594 19:44:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:03.594 19:44:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:03.594 19:44:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:03.594 19:44:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:03.594 19:44:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:03.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:03.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms
00:14:03.594
00:14:03.594 --- 10.0.0.2 ping statistics ---
00:14:03.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:03.594 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms
00:14:03.594 19:44:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:03.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:03.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms
00:14:03.594
00:14:03.594 --- 10.0.0.1 ping statistics ---
00:14:03.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:03.594 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms
00:14:03.594 19:44:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:03.594 19:44:45 -- nvmf/common.sh@411 -- # return 0
00:14:03.594 19:44:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:14:03.853 19:44:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:03.853 19:44:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:14:03.854 19:44:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:14:03.854 19:44:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:03.854 19:44:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:14:03.854 19:44:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:14:03.854 19:44:45 -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:14:03.854 19:44:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:14:03.854 19:44:45 -- common/autotest_common.sh@710 -- # xtrace_disable
00:14:03.854 19:44:45 -- common/autotest_common.sh@10 -- # set +x
00:14:03.854 19:44:45 -- nvmf/common.sh@470 -- # nvmfpid=1688832
00:14:03.854 19:44:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:14:03.854 19:44:45 -- nvmf/common.sh@471 -- # waitforlisten 1688832
00:14:03.854 19:44:45 -- common/autotest_common.sh@817 -- # '[' -z 1688832 ']'
00:14:03.854 19:44:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:03.854 19:44:45 -- common/autotest_common.sh@822 -- # local max_retries=100
00:14:03.854 19:44:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:03.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:03.854 19:44:45 -- common/autotest_common.sh@826 -- # xtrace_disable
00:14:03.854 19:44:45 -- common/autotest_common.sh@10 -- # set +x
00:14:03.854 [2024-04-24 19:44:45.182789] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization...
00:14:03.854 [2024-04-24 19:44:45.182868] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:03.854 EAL: No free 2048 kB hugepages reported on node 1
00:14:03.854 [2024-04-24 19:44:45.248182] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:03.854 [2024-04-24 19:44:45.359156] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:03.854 [2024-04-24 19:44:45.359214] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:03.854 [2024-04-24 19:44:45.359242] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:03.854 [2024-04-24 19:44:45.359253] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:03.854 [2024-04-24 19:44:45.359262] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
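Condensed, the block above turns the two discovered E810 ports into a point-to-point test link on a single host: cvl_0_0 moves into a private network namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1), reachability is ping-verified in both directions, and nvmf_tgt starts inside the namespace with a 0xF core mask. Replayed as a sketch, with the SPDK path shortened and the waitforlisten polling elided:

# One host, two NIC ports: the initiator (root ns) reaches the target
# (cvl_0_0_ns_spdk) over the physical wire between cvl_0_1 and cvl_0_0.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
modprobe nvme-tcp
# Start the target inside the namespace; backgrounding is assumed here, and
# the harness then polls /var/tmp/spdk.sock until the RPC server answers.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Because the target runs in its own namespace, the kernel initiator in the root namespace has to cross a real TCP/IP path, which is what makes the E810 ports, rather than loopback, carry the NVMe/TCP traffic.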
00:14:03.854 [2024-04-24 19:44:45.359352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:03.854 [2024-04-24 19:44:45.359417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:14:03.854 [2024-04-24 19:44:45.359485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:14:03.854 [2024-04-24 19:44:45.359488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:04.112 19:44:45 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:14:04.112 19:44:45 -- common/autotest_common.sh@850 -- # return 0
00:14:04.112 19:44:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:14:04.112 19:44:45 -- common/autotest_common.sh@716 -- # xtrace_disable
00:14:04.112 19:44:45 -- common/autotest_common.sh@10 -- # set +x
00:14:04.112 19:44:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:04.112 19:44:45 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:14:04.369 [2024-04-24 19:44:45.771256] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:04.369 19:44:45 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:04.628 19:44:46 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:14:04.628 19:44:46 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:04.886 19:44:46 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:14:04.886 19:44:46 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:05.145 19:44:46 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:14:05.145 19:44:46 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:05.402 19:44:46 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:14:05.402 19:44:46 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:14:05.660 19:44:47 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:05.918 19:44:47 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:14:05.918 19:44:47 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:06.177 19:44:47 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:14:06.177 19:44:47 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:06.435 19:44:47 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:14:06.435 19:44:47 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:14:06.693 19:44:48 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:14:06.951 19:44:48 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:14:06.951 19:44:48 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:14:07.209 19:44:48 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:14:07.209 19:44:48 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:07.466 19:44:48 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:07.724 [2024-04-24 19:44:49.099904] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:07.724 19:44:49 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:14:07.981 19:44:49 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:14:08.239 19:44:49 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:08.805 19:44:50 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:14:08.805 19:44:50 -- common/autotest_common.sh@1184 -- # local i=0
00:14:08.805 19:44:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0
00:14:08.805 19:44:50 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]]
00:14:08.805 19:44:50 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4
00:14:08.805 19:44:50 -- common/autotest_common.sh@1191 -- # sleep 2
00:14:11.333 19:44:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:14:11.333 19:44:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:14:11.333 19:44:52 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME
00:14:11.333 19:44:52 -- common/autotest_common.sh@1193 -- # nvme_devices=4
00:14:11.333 19:44:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter ))
00:14:11.333 19:44:52 -- common/autotest_common.sh@1194 -- # return 0
00:14:11.333 19:44:52 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:14:11.333 [global]
00:14:11.333 thread=1
00:14:11.333 invalidate=1
00:14:11.333 rw=write
00:14:11.333 time_based=1
00:14:11.333 runtime=1
00:14:11.333 ioengine=libaio
00:14:11.333 direct=1
00:14:11.333 bs=4096
00:14:11.333 iodepth=1
00:14:11.333 norandommap=0
00:14:11.333 numjobs=1
00:14:11.333
00:14:11.333 verify_dump=1
00:14:11.333 verify_backlog=512
00:14:11.333 verify_state_save=0
00:14:11.333 do_verify=1
00:14:11.333 verify=crc32c-intel
00:14:11.333 [job0]
00:14:11.333 filename=/dev/nvme0n1
00:14:11.333 [job1]
00:14:11.333 filename=/dev/nvme0n2
00:14:11.333 [job2]
00:14:11.333 filename=/dev/nvme0n3
00:14:11.333 [job3]
00:14:11.333 filename=/dev/nvme0n4
00:14:11.333 Could not set queue depth (nvme0n1)
00:14:11.333 Could not set queue depth (nvme0n2)
00:14:11.333 Could not set queue depth (nvme0n3)
00:14:11.333 Could not set queue depth (nvme0n4)
00:14:11.333 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:11.333 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:11.333 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:11.333 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:11.333 fio-3.35
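Everything the first fio pass needs is provisioned over the RPC socket in the stretch above: a TCP transport, seven 64 MiB malloc bdevs with 512-byte blocks, a RAID-0 (raid0 = Malloc2+Malloc3) and a concatenation (concat0 = Malloc4..Malloc6), a subsystem with four namespaces, a TCP listener on 10.0.0.2:4420, and a kernel-initiator connect that must surface four block devices before I/O starts. The same sequence as a condensed sketch ($rpc is shorthand for the fully qualified scripts/rpc.py; the log interleaves the bdev creations, this groups them):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192   # opts from NVMF_TRANSPORT_OPTS='-t tcp -o'
for _ in 0 1 2 3 4 5 6; do                     # creates Malloc0 .. Malloc6
    $rpc bdev_malloc_create 64 512             # 64 MiB, 512 B blocks
done
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
# waitforserial: poll until all 4 namespaces (nvme0n1..n4) are visible
while (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) < 4 )); do
    sleep 2
done

The serial SPDKISFASTANDAWESOME set at subsystem creation is what lets waitforserial count the resulting block devices by serial number instead of guessing device names.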
00:14:11.333 Starting 4 threads 00:14:12.267 00:14:12.267 job0: (groupid=0, jobs=1): err= 0: pid=1689891: Wed Apr 24 19:44:53 2024 00:14:12.267 read: IOPS=654, BW=2616KiB/s (2679kB/s)(2692KiB/1029msec) 00:14:12.267 slat (nsec): min=5650, max=54006, avg=13234.48, stdev=6095.80 00:14:12.267 clat (usec): min=317, max=41455, avg=1012.57, stdev=4929.77 00:14:12.267 lat (usec): min=325, max=41462, avg=1025.81, stdev=4930.11 00:14:12.267 clat percentiles (usec): 00:14:12.267 | 1.00th=[ 330], 5.00th=[ 347], 10.00th=[ 359], 20.00th=[ 363], 00:14:12.267 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 400], 00:14:12.267 | 70.00th=[ 429], 80.00th=[ 457], 90.00th=[ 490], 95.00th=[ 537], 00:14:12.267 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:14:12.267 | 99.99th=[41681] 00:14:12.267 write: IOPS=995, BW=3981KiB/s (4076kB/s)(4096KiB/1029msec); 0 zone resets 00:14:12.267 slat (nsec): min=7317, max=76924, avg=17667.75, stdev=11827.67 00:14:12.267 clat (usec): min=198, max=742, avg=305.20, stdev=58.22 00:14:12.267 lat (usec): min=205, max=766, avg=322.87, stdev=63.89 00:14:12.267 clat percentiles (usec): 00:14:12.267 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 223], 20.00th=[ 251], 00:14:12.267 | 30.00th=[ 273], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 318], 00:14:12.267 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 379], 95.00th=[ 404], 00:14:12.267 | 99.00th=[ 449], 99.50th=[ 461], 99.90th=[ 490], 99.95th=[ 742], 00:14:12.267 | 99.99th=[ 742] 00:14:12.267 bw ( KiB/s): min= 3472, max= 4710, per=34.26%, avg=4091.00, stdev=875.40, samples=2 00:14:12.267 iops : min= 868, max= 1177, avg=1022.50, stdev=218.50, samples=2 00:14:12.267 lat (usec) : 250=11.84%, 500=85.33%, 750=2.00%, 1000=0.24% 00:14:12.267 lat (msec) : 50=0.59% 00:14:12.267 cpu : usr=1.75%, sys=3.50%, ctx=1699, majf=0, minf=2 00:14:12.267 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:12.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.267 issued rwts: total=673,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.267 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:12.267 job1: (groupid=0, jobs=1): err= 0: pid=1689901: Wed Apr 24 19:44:53 2024 00:14:12.267 read: IOPS=463, BW=1852KiB/s (1897kB/s)(1904KiB/1028msec) 00:14:12.267 slat (nsec): min=6325, max=44515, avg=16513.51, stdev=6269.87 00:14:12.267 clat (usec): min=402, max=41316, avg=1736.44, stdev=6847.84 00:14:12.267 lat (usec): min=410, max=41328, avg=1752.95, stdev=6848.01 00:14:12.267 clat percentiles (usec): 00:14:12.267 | 1.00th=[ 433], 5.00th=[ 482], 10.00th=[ 486], 20.00th=[ 506], 00:14:12.267 | 30.00th=[ 523], 40.00th=[ 529], 50.00th=[ 529], 60.00th=[ 537], 00:14:12.267 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 570], 95.00th=[ 627], 00:14:12.268 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:12.268 | 99.99th=[41157] 00:14:12.268 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:14:12.268 slat (nsec): min=9176, max=77278, avg=28681.50, stdev=12147.01 00:14:12.268 clat (usec): min=259, max=752, avg=335.90, stdev=53.89 00:14:12.268 lat (usec): min=269, max=771, avg=364.58, stdev=58.00 00:14:12.268 clat percentiles (usec): 00:14:12.268 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 293], 00:14:12.268 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 330], 00:14:12.268 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 
400], 95.00th=[ 441], 00:14:12.268 | 99.00th=[ 490], 99.50th=[ 553], 99.90th=[ 750], 99.95th=[ 750], 00:14:12.268 | 99.99th=[ 750] 00:14:12.268 bw ( KiB/s): min= 4087, max= 4087, per=34.22%, avg=4087.00, stdev= 0.00, samples=1 00:14:12.268 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:14:12.268 lat (usec) : 500=59.92%, 750=38.16%, 1000=0.20% 00:14:12.268 lat (msec) : 2=0.10%, 4=0.20%, 50=1.42% 00:14:12.268 cpu : usr=0.97%, sys=3.51%, ctx=991, majf=0, minf=1 00:14:12.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:12.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.268 issued rwts: total=476,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:12.268 job2: (groupid=0, jobs=1): err= 0: pid=1689902: Wed Apr 24 19:44:53 2024 00:14:12.268 read: IOPS=433, BW=1734KiB/s (1776kB/s)(1736KiB/1001msec) 00:14:12.268 slat (nsec): min=4826, max=42159, avg=12116.30, stdev=4706.12 00:14:12.268 clat (usec): min=345, max=41037, avg=1870.11, stdev=7417.09 00:14:12.268 lat (usec): min=358, max=41071, avg=1882.23, stdev=7419.07 00:14:12.268 clat percentiles (usec): 00:14:12.268 | 1.00th=[ 351], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 412], 00:14:12.268 | 30.00th=[ 424], 40.00th=[ 429], 50.00th=[ 441], 60.00th=[ 453], 00:14:12.268 | 70.00th=[ 469], 80.00th=[ 486], 90.00th=[ 506], 95.00th=[ 537], 00:14:12.268 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:12.268 | 99.99th=[41157] 00:14:12.268 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:14:12.268 slat (nsec): min=7162, max=74424, avg=28184.67, stdev=12834.39 00:14:12.268 clat (usec): min=224, max=767, avg=319.86, stdev=63.51 00:14:12.268 lat (usec): min=244, max=786, avg=348.04, stdev=62.85 00:14:12.268 clat percentiles (usec): 00:14:12.268 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 260], 00:14:12.268 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 322], 60.00th=[ 338], 00:14:12.268 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 412], 00:14:12.268 | 99.00th=[ 478], 99.50th=[ 537], 99.90th=[ 766], 99.95th=[ 766], 00:14:12.268 | 99.99th=[ 766] 00:14:12.268 bw ( KiB/s): min= 4087, max= 4087, per=34.22%, avg=4087.00, stdev= 0.00, samples=1 00:14:12.268 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:14:12.268 lat (usec) : 250=7.82%, 500=86.58%, 750=3.70%, 1000=0.11% 00:14:12.268 lat (msec) : 4=0.11%, 10=0.11%, 50=1.59% 00:14:12.268 cpu : usr=0.60%, sys=2.40%, ctx=947, majf=0, minf=1 00:14:12.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:12.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.268 issued rwts: total=434,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:12.268 job3: (groupid=0, jobs=1): err= 0: pid=1689903: Wed Apr 24 19:44:53 2024 00:14:12.268 read: IOPS=675, BW=2701KiB/s (2766kB/s)(2704KiB/1001msec) 00:14:12.268 slat (nsec): min=5681, max=69857, avg=15379.16, stdev=10083.14 00:14:12.268 clat (usec): min=346, max=41573, avg=993.74, stdev=4664.21 00:14:12.268 lat (usec): min=358, max=41591, avg=1009.12, stdev=4664.53 00:14:12.268 clat percentiles (usec): 00:14:12.268 | 1.00th=[ 359], 5.00th=[ 
383], 10.00th=[ 392], 20.00th=[ 404], 00:14:12.268 | 30.00th=[ 420], 40.00th=[ 429], 50.00th=[ 437], 60.00th=[ 453], 00:14:12.268 | 70.00th=[ 482], 80.00th=[ 498], 90.00th=[ 523], 95.00th=[ 545], 00:14:12.268 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:14:12.268 | 99.99th=[41681] 00:14:12.268 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:14:12.268 slat (nsec): min=7380, max=78904, avg=20478.47, stdev=13523.22 00:14:12.268 clat (usec): min=204, max=587, avg=282.17, stdev=62.63 00:14:12.268 lat (usec): min=212, max=600, avg=302.64, stdev=71.49 00:14:12.268 clat percentiles (usec): 00:14:12.268 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 227], 00:14:12.268 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 262], 60.00th=[ 289], 00:14:12.268 | 70.00th=[ 314], 80.00th=[ 343], 90.00th=[ 367], 95.00th=[ 396], 00:14:12.268 | 99.00th=[ 445], 99.50th=[ 461], 99.90th=[ 553], 99.95th=[ 586], 00:14:12.268 | 99.99th=[ 586] 00:14:12.268 bw ( KiB/s): min= 4087, max= 4087, per=34.22%, avg=4087.00, stdev= 0.00, samples=1 00:14:12.268 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:14:12.268 lat (usec) : 250=27.82%, 500=64.82%, 750=6.76% 00:14:12.268 lat (msec) : 4=0.06%, 50=0.53% 00:14:12.268 cpu : usr=1.40%, sys=3.80%, ctx=1701, majf=0, minf=1 00:14:12.268 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:12.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.268 issued rwts: total=676,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.268 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:12.268 00:14:12.268 Run status group 0 (all jobs): 00:14:12.268 READ: bw=8781KiB/s (8992kB/s), 1734KiB/s-2701KiB/s (1776kB/s-2766kB/s), io=9036KiB (9253kB), run=1001-1029msec 00:14:12.268 WRITE: bw=11.7MiB/s (12.2MB/s), 1992KiB/s-4092KiB/s (2040kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1029msec 00:14:12.268 00:14:12.268 Disk stats (read/write): 00:14:12.268 nvme0n1: ios=712/1024, merge=0/0, ticks=1200/301, in_queue=1501, util=85.47% 00:14:12.268 nvme0n2: ios=521/512, merge=0/0, ticks=950/150, in_queue=1100, util=91.25% 00:14:12.268 nvme0n3: ios=332/512, merge=0/0, ticks=1020/153, in_queue=1173, util=95.50% 00:14:12.268 nvme0n4: ios=571/742, merge=0/0, ticks=732/207, in_queue=939, util=96.31% 00:14:12.268 19:44:53 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:12.268 [global] 00:14:12.268 thread=1 00:14:12.268 invalidate=1 00:14:12.268 rw=randwrite 00:14:12.268 time_based=1 00:14:12.268 runtime=1 00:14:12.268 ioengine=libaio 00:14:12.268 direct=1 00:14:12.268 bs=4096 00:14:12.268 iodepth=1 00:14:12.268 norandommap=0 00:14:12.268 numjobs=1 00:14:12.268 00:14:12.525 verify_dump=1 00:14:12.525 verify_backlog=512 00:14:12.525 verify_state_save=0 00:14:12.525 do_verify=1 00:14:12.525 verify=crc32c-intel 00:14:12.525 [job0] 00:14:12.525 filename=/dev/nvme0n1 00:14:12.525 [job1] 00:14:12.525 filename=/dev/nvme0n2 00:14:12.525 [job2] 00:14:12.525 filename=/dev/nvme0n3 00:14:12.525 [job3] 00:14:12.525 filename=/dev/nvme0n4 00:14:12.525 Could not set queue depth (nvme0n1) 00:14:12.525 Could not set queue depth (nvme0n2) 00:14:12.525 Could not set queue depth (nvme0n3) 00:14:12.525 Could not set queue depth (nvme0n4) 00:14:12.525 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.525 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.525 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.525 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:12.525 fio-3.35 00:14:12.525 Starting 4 threads 00:14:13.901 00:14:13.901 job0: (groupid=0, jobs=1): err= 0: pid=1690135: Wed Apr 24 19:44:55 2024 00:14:13.901 read: IOPS=506, BW=2027KiB/s (2076kB/s)(2088KiB/1030msec) 00:14:13.901 slat (nsec): min=6417, max=52412, avg=17692.39, stdev=7480.35 00:14:13.901 clat (usec): min=370, max=41050, avg=1291.87, stdev=5552.73 00:14:13.901 lat (usec): min=385, max=41058, avg=1309.57, stdev=5552.88 00:14:13.901 clat percentiles (usec): 00:14:13.901 | 1.00th=[ 383], 5.00th=[ 404], 10.00th=[ 412], 20.00th=[ 420], 00:14:13.901 | 30.00th=[ 482], 40.00th=[ 519], 50.00th=[ 537], 60.00th=[ 553], 00:14:13.901 | 70.00th=[ 570], 80.00th=[ 586], 90.00th=[ 603], 95.00th=[ 627], 00:14:13.901 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:13.901 | 99.99th=[41157] 00:14:13.901 write: IOPS=994, BW=3977KiB/s (4072kB/s)(4096KiB/1030msec); 0 zone resets 00:14:13.901 slat (nsec): min=6908, max=71390, avg=17773.22, stdev=9091.48 00:14:13.901 clat (usec): min=245, max=447, avg=312.19, stdev=35.78 00:14:13.901 lat (usec): min=253, max=460, avg=329.96, stdev=39.37 00:14:13.901 clat percentiles (usec): 00:14:13.901 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 281], 00:14:13.901 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 310], 60.00th=[ 318], 00:14:13.901 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 375], 00:14:13.901 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 420], 99.95th=[ 449], 00:14:13.901 | 99.99th=[ 449] 00:14:13.901 bw ( KiB/s): min= 2880, max= 5312, per=34.40%, avg=4096.00, stdev=1719.68, samples=2 00:14:13.901 iops : min= 720, max= 1328, avg=1024.00, stdev=429.92, samples=2 00:14:13.901 lat (usec) : 250=0.78%, 500=76.46%, 750=22.12% 00:14:13.901 lat (msec) : 50=0.65% 00:14:13.901 cpu : usr=2.43%, sys=2.53%, ctx=1547, majf=0, minf=1 00:14:13.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.902 issued rwts: total=522,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.902 job1: (groupid=0, jobs=1): err= 0: pid=1690136: Wed Apr 24 19:44:55 2024 00:14:13.902 read: IOPS=786, BW=3145KiB/s (3220kB/s)(3148KiB/1001msec) 00:14:13.902 slat (nsec): min=5132, max=57200, avg=13257.83, stdev=8964.24 00:14:13.902 clat (usec): min=296, max=42464, avg=842.81, stdev=4214.74 00:14:13.902 lat (usec): min=308, max=42472, avg=856.06, stdev=4215.44 00:14:13.902 clat percentiles (usec): 00:14:13.902 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 322], 00:14:13.902 | 30.00th=[ 330], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 371], 00:14:13.902 | 70.00th=[ 412], 80.00th=[ 490], 90.00th=[ 570], 95.00th=[ 635], 00:14:13.902 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:13.902 | 99.99th=[42206] 00:14:13.902 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:14:13.902 slat (nsec): 
min=6750, max=68757, avg=15990.32, stdev=10145.00 00:14:13.902 clat (usec): min=190, max=570, avg=295.81, stdev=78.36 00:14:13.902 lat (usec): min=198, max=588, avg=311.80, stdev=83.20 00:14:13.902 clat percentiles (usec): 00:14:13.902 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 00:14:13.902 | 30.00th=[ 221], 40.00th=[ 273], 50.00th=[ 297], 60.00th=[ 322], 00:14:13.902 | 70.00th=[ 343], 80.00th=[ 363], 90.00th=[ 396], 95.00th=[ 429], 00:14:13.902 | 99.00th=[ 486], 99.50th=[ 506], 99.90th=[ 562], 99.95th=[ 570], 00:14:13.902 | 99.99th=[ 570] 00:14:13.902 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=1 00:14:13.902 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:13.902 lat (usec) : 250=20.43%, 500=71.84%, 750=7.12%, 1000=0.11% 00:14:13.902 lat (msec) : 20=0.06%, 50=0.44% 00:14:13.902 cpu : usr=1.40%, sys=2.80%, ctx=1812, majf=0, minf=1 00:14:13.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.902 issued rwts: total=787,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.902 job2: (groupid=0, jobs=1): err= 0: pid=1690137: Wed Apr 24 19:44:55 2024 00:14:13.902 read: IOPS=108, BW=436KiB/s (446kB/s)(436KiB/1001msec) 00:14:13.902 slat (nsec): min=5520, max=46969, avg=19431.01, stdev=11594.01 00:14:13.902 clat (usec): min=444, max=41426, avg=7372.35, stdev=15055.75 00:14:13.902 lat (usec): min=463, max=41432, avg=7391.78, stdev=15055.36 00:14:13.902 clat percentiles (usec): 00:14:13.902 | 1.00th=[ 465], 5.00th=[ 494], 10.00th=[ 545], 20.00th=[ 586], 00:14:13.902 | 30.00th=[ 611], 40.00th=[ 635], 50.00th=[ 644], 60.00th=[ 644], 00:14:13.902 | 70.00th=[ 660], 80.00th=[ 775], 90.00th=[41157], 95.00th=[41157], 00:14:13.902 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:13.902 | 99.99th=[41681] 00:14:13.902 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:14:13.902 slat (nsec): min=6553, max=65865, avg=18734.13, stdev=9267.42 00:14:13.902 clat (usec): min=242, max=712, avg=355.20, stdev=55.63 00:14:13.902 lat (usec): min=250, max=720, avg=373.93, stdev=57.63 00:14:13.902 clat percentiles (usec): 00:14:13.902 | 1.00th=[ 258], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 310], 00:14:13.902 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 359], 00:14:13.902 | 70.00th=[ 375], 80.00th=[ 396], 90.00th=[ 433], 95.00th=[ 449], 00:14:13.902 | 99.00th=[ 510], 99.50th=[ 578], 99.90th=[ 709], 99.95th=[ 709], 00:14:13.902 | 99.99th=[ 709] 00:14:13.902 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=1 00:14:13.902 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:13.902 lat (usec) : 250=0.64%, 500=81.64%, 750=14.17%, 1000=0.48% 00:14:13.902 lat (msec) : 10=0.16%, 50=2.90% 00:14:13.902 cpu : usr=0.50%, sys=1.30%, ctx=621, majf=0, minf=2 00:14:13.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.902 issued rwts: total=109,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.902 job3: 
(groupid=0, jobs=1): err= 0: pid=1690138: Wed Apr 24 19:44:55 2024 00:14:13.902 read: IOPS=71, BW=287KiB/s (294kB/s)(296KiB/1032msec) 00:14:13.902 slat (nsec): min=5665, max=34387, avg=10397.34, stdev=6993.21 00:14:13.902 clat (usec): min=383, max=41623, avg=11540.60, stdev=18073.95 00:14:13.902 lat (usec): min=399, max=41639, avg=11551.00, stdev=18079.29 00:14:13.902 clat percentiles (usec): 00:14:13.902 | 1.00th=[ 383], 5.00th=[ 465], 10.00th=[ 553], 20.00th=[ 578], 00:14:13.902 | 30.00th=[ 627], 40.00th=[ 644], 50.00th=[ 644], 60.00th=[ 652], 00:14:13.902 | 70.00th=[ 783], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:13.902 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:14:13.902 | 99.99th=[41681] 00:14:13.902 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:14:13.902 slat (nsec): min=7815, max=61478, avg=21117.93, stdev=11130.90 00:14:13.902 clat (usec): min=250, max=890, avg=317.50, stdev=49.67 00:14:13.902 lat (usec): min=259, max=901, avg=338.62, stdev=52.15 00:14:13.902 clat percentiles (usec): 00:14:13.902 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 285], 00:14:13.902 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318], 00:14:13.902 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 383], 00:14:13.902 | 99.00th=[ 486], 99.50th=[ 578], 99.90th=[ 889], 99.95th=[ 889], 00:14:13.902 | 99.99th=[ 889] 00:14:13.902 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=1 00:14:13.902 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:13.902 lat (usec) : 500=87.37%, 750=8.53%, 1000=0.68% 00:14:13.902 lat (msec) : 50=3.41% 00:14:13.902 cpu : usr=0.58%, sys=1.07%, ctx=587, majf=0, minf=1 00:14:13.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:13.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:13.902 issued rwts: total=74,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:13.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:13.902 00:14:13.902 Run status group 0 (all jobs): 00:14:13.902 READ: bw=5783KiB/s (5922kB/s), 287KiB/s-3145KiB/s (294kB/s-3220kB/s), io=5968KiB (6111kB), run=1001-1032msec 00:14:13.902 WRITE: bw=11.6MiB/s (12.2MB/s), 1984KiB/s-4092KiB/s (2032kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1032msec 00:14:13.902 00:14:13.902 Disk stats (read/write): 00:14:13.902 nvme0n1: ios=542/1024, merge=0/0, ticks=1469/306, in_queue=1775, util=98.20% 00:14:13.902 nvme0n2: ios=554/907, merge=0/0, ticks=791/268, in_queue=1059, util=98.27% 00:14:13.902 nvme0n3: ios=96/512, merge=0/0, ticks=683/174, in_queue=857, util=91.04% 00:14:13.902 nvme0n4: ios=96/512, merge=0/0, ticks=1596/151, in_queue=1747, util=98.11% 00:14:13.902 19:44:55 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:13.902 [global] 00:14:13.902 thread=1 00:14:13.902 invalidate=1 00:14:13.902 rw=write 00:14:13.902 time_based=1 00:14:13.902 runtime=1 00:14:13.902 ioengine=libaio 00:14:13.902 direct=1 00:14:13.902 bs=4096 00:14:13.902 iodepth=128 00:14:13.902 norandommap=0 00:14:13.902 numjobs=1 00:14:13.902 00:14:13.902 verify_dump=1 00:14:13.902 verify_backlog=512 00:14:13.902 verify_state_save=0 00:14:13.902 do_verify=1 00:14:13.902 verify=crc32c-intel 00:14:13.902 [job0] 00:14:13.902 filename=/dev/nvme0n1 00:14:13.902 [job1] 
00:14:13.902 filename=/dev/nvme0n2 00:14:13.902 [job2] 00:14:13.902 filename=/dev/nvme0n3 00:14:13.902 [job3] 00:14:13.902 filename=/dev/nvme0n4 00:14:13.902 Could not set queue depth (nvme0n1) 00:14:13.902 Could not set queue depth (nvme0n2) 00:14:13.902 Could not set queue depth (nvme0n3) 00:14:13.902 Could not set queue depth (nvme0n4) 00:14:14.200 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:14.200 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:14.200 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:14.200 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:14.200 fio-3.35 00:14:14.200 Starting 4 threads 00:14:15.578 00:14:15.578 job0: (groupid=0, jobs=1): err= 0: pid=1690367: Wed Apr 24 19:44:56 2024 00:14:15.578 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:14:15.578 slat (usec): min=2, max=16916, avg=132.85, stdev=858.24 00:14:15.578 clat (usec): min=6974, max=41273, avg=17299.49, stdev=5574.93 00:14:15.578 lat (usec): min=6980, max=41292, avg=17432.33, stdev=5632.88 00:14:15.578 clat percentiles (usec): 00:14:15.578 | 1.00th=[ 7177], 5.00th=[10552], 10.00th=[11076], 20.00th=[12780], 00:14:15.578 | 30.00th=[14353], 40.00th=[15139], 50.00th=[16188], 60.00th=[17695], 00:14:15.578 | 70.00th=[19006], 80.00th=[21627], 90.00th=[26084], 95.00th=[28443], 00:14:15.578 | 99.00th=[32375], 99.50th=[36439], 99.90th=[36439], 99.95th=[38011], 00:14:15.578 | 99.99th=[41157] 00:14:15.578 write: IOPS=4040, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1008msec); 0 zone resets 00:14:15.578 slat (usec): min=3, max=13040, avg=116.68, stdev=744.07 00:14:15.578 clat (usec): min=3705, max=41689, avg=15991.46, stdev=5145.44 00:14:15.578 lat (usec): min=4225, max=41710, avg=16108.14, stdev=5190.82 00:14:15.578 clat percentiles (usec): 00:14:15.578 | 1.00th=[ 8586], 5.00th=[10028], 10.00th=[10159], 20.00th=[11469], 00:14:15.578 | 30.00th=[12518], 40.00th=[14222], 50.00th=[15270], 60.00th=[16188], 00:14:15.578 | 70.00th=[17695], 80.00th=[19268], 90.00th=[22152], 95.00th=[26084], 00:14:15.578 | 99.00th=[32375], 99.50th=[32637], 99.90th=[34341], 99.95th=[38011], 00:14:15.578 | 99.99th=[41681] 00:14:15.578 bw ( KiB/s): min=14848, max=16678, per=25.56%, avg=15763.00, stdev=1294.01, samples=2 00:14:15.578 iops : min= 3712, max= 4169, avg=3940.50, stdev=323.15, samples=2 00:14:15.578 lat (msec) : 4=0.01%, 10=3.98%, 20=74.43%, 50=21.58% 00:14:15.578 cpu : usr=3.18%, sys=5.46%, ctx=314, majf=0, minf=1 00:14:15.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:15.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:15.578 issued rwts: total=3584,4073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:15.578 job1: (groupid=0, jobs=1): err= 0: pid=1690368: Wed Apr 24 19:44:56 2024 00:14:15.578 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:14:15.578 slat (usec): min=2, max=14107, avg=98.79, stdev=726.94 00:14:15.578 clat (usec): min=4985, max=27651, avg=13391.10, stdev=3139.54 00:14:15.578 lat (usec): min=5530, max=27686, avg=13489.89, stdev=3202.38 00:14:15.578 clat percentiles (usec): 00:14:15.578 | 1.00th=[ 7373], 5.00th=[ 8225], 10.00th=[ 
9110], 20.00th=[11076], 00:14:15.578 | 30.00th=[11731], 40.00th=[12649], 50.00th=[13435], 60.00th=[13960], 00:14:15.578 | 70.00th=[14484], 80.00th=[16319], 90.00th=[17433], 95.00th=[18744], 00:14:15.578 | 99.00th=[22152], 99.50th=[23462], 99.90th=[25297], 99.95th=[25297], 00:14:15.578 | 99.99th=[27657] 00:14:15.578 write: IOPS=4940, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1005msec); 0 zone resets 00:14:15.578 slat (usec): min=3, max=13325, avg=97.80, stdev=732.06 00:14:15.578 clat (usec): min=833, max=32833, avg=13250.78, stdev=4988.22 00:14:15.578 lat (usec): min=842, max=32840, avg=13348.58, stdev=5035.41 00:14:15.578 clat percentiles (usec): 00:14:15.578 | 1.00th=[ 3326], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 8029], 00:14:15.578 | 30.00th=[ 9634], 40.00th=[11731], 50.00th=[14091], 60.00th=[15139], 00:14:15.578 | 70.00th=[16450], 80.00th=[17695], 90.00th=[19268], 95.00th=[20841], 00:14:15.578 | 99.00th=[25560], 99.50th=[25822], 99.90th=[30278], 99.95th=[30278], 00:14:15.578 | 99.99th=[32900] 00:14:15.578 bw ( KiB/s): min=18216, max=20439, per=31.34%, avg=19327.50, stdev=1571.90, samples=2 00:14:15.578 iops : min= 4554, max= 5109, avg=4831.50, stdev=392.44, samples=2 00:14:15.578 lat (usec) : 1000=0.04% 00:14:15.578 lat (msec) : 2=0.22%, 4=0.33%, 10=22.48%, 20=72.33%, 50=4.60% 00:14:15.578 cpu : usr=2.59%, sys=5.58%, ctx=298, majf=0, minf=1 00:14:15.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:15.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:15.578 issued rwts: total=4608,4965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:15.578 job2: (groupid=0, jobs=1): err= 0: pid=1690369: Wed Apr 24 19:44:56 2024 00:14:15.578 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:14:15.578 slat (usec): min=2, max=24326, avg=165.97, stdev=1266.52 00:14:15.578 clat (usec): min=8779, max=93611, avg=20896.17, stdev=18264.22 00:14:15.578 lat (usec): min=8786, max=93618, avg=21062.14, stdev=18370.39 00:14:15.578 clat percentiles (usec): 00:14:15.578 | 1.00th=[ 9241], 5.00th=[10421], 10.00th=[11076], 20.00th=[11863], 00:14:15.578 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13304], 60.00th=[15008], 00:14:15.578 | 70.00th=[20055], 80.00th=[24249], 90.00th=[33424], 95.00th=[74974], 00:14:15.578 | 99.00th=[93848], 99.50th=[93848], 99.90th=[93848], 99.95th=[93848], 00:14:15.578 | 99.99th=[93848] 00:14:15.578 write: IOPS=3515, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1004msec); 0 zone resets 00:14:15.578 slat (usec): min=3, max=15559, avg=133.56, stdev=801.03 00:14:15.578 clat (usec): min=360, max=61043, avg=17495.58, stdev=10588.21 00:14:15.578 lat (usec): min=4062, max=61055, avg=17629.14, stdev=10665.60 00:14:15.578 clat percentiles (usec): 00:14:15.578 | 1.00th=[ 4490], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[11338], 00:14:15.578 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[14484], 00:14:15.578 | 70.00th=[19006], 80.00th=[22676], 90.00th=[27132], 95.00th=[45351], 00:14:15.578 | 99.00th=[57934], 99.50th=[60556], 99.90th=[61080], 99.95th=[61080], 00:14:15.578 | 99.99th=[61080] 00:14:15.578 bw ( KiB/s): min= 8192, max=19024, per=22.06%, avg=13608.00, stdev=7659.38, samples=2 00:14:15.578 iops : min= 2048, max= 4756, avg=3402.00, stdev=1914.85, samples=2 00:14:15.578 lat (usec) : 500=0.02% 00:14:15.578 lat (msec) : 10=4.97%, 20=65.98%, 50=23.92%, 100=5.12% 00:14:15.578 cpu : 
usr=3.49%, sys=3.89%, ctx=357, majf=0, minf=1 00:14:15.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:14:15.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:15.578 issued rwts: total=3072,3530,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:15.578 job3: (groupid=0, jobs=1): err= 0: pid=1690370: Wed Apr 24 19:44:56 2024 00:14:15.578 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:14:15.578 slat (usec): min=3, max=14146, avg=188.64, stdev=1109.52 00:14:15.578 clat (usec): min=5129, max=82176, avg=24372.23, stdev=16591.94 00:14:15.578 lat (usec): min=5137, max=82181, avg=24560.87, stdev=16690.89 00:14:15.578 clat percentiles (usec): 00:14:15.578 | 1.00th=[ 9634], 5.00th=[11076], 10.00th=[11469], 20.00th=[13435], 00:14:15.578 | 30.00th=[14091], 40.00th=[15664], 50.00th=[17695], 60.00th=[20841], 00:14:15.578 | 70.00th=[27919], 80.00th=[29754], 90.00th=[45876], 95.00th=[67634], 00:14:15.578 | 99.00th=[81265], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:14:15.578 | 99.99th=[82314] 00:14:15.578 write: IOPS=2951, BW=11.5MiB/s (12.1MB/s)(11.6MiB/1008msec); 0 zone resets 00:14:15.578 slat (usec): min=4, max=21626, avg=166.62, stdev=1142.00 00:14:15.578 clat (usec): min=403, max=68620, avg=21993.68, stdev=14271.24 00:14:15.578 lat (usec): min=3903, max=68628, avg=22160.31, stdev=14342.51 00:14:15.578 clat percentiles (usec): 00:14:15.578 | 1.00th=[ 5342], 5.00th=[ 7898], 10.00th=[ 9241], 20.00th=[11338], 00:14:15.578 | 30.00th=[13829], 40.00th=[15664], 50.00th=[19006], 60.00th=[21627], 00:14:15.578 | 70.00th=[22938], 80.00th=[25297], 90.00th=[49546], 95.00th=[60556], 00:14:15.578 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:14:15.578 | 99.99th=[68682] 00:14:15.578 bw ( KiB/s): min= 6392, max=16384, per=18.46%, avg=11388.00, stdev=7065.41, samples=2 00:14:15.578 iops : min= 1598, max= 4096, avg=2847.00, stdev=1766.35, samples=2 00:14:15.578 lat (usec) : 500=0.02% 00:14:15.578 lat (msec) : 4=0.13%, 10=6.32%, 20=49.05%, 50=34.81%, 100=9.67% 00:14:15.578 cpu : usr=3.48%, sys=5.16%, ctx=249, majf=0, minf=1 00:14:15.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:15.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:15.578 issued rwts: total=2560,2975,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:15.578 00:14:15.578 Run status group 0 (all jobs): 00:14:15.578 READ: bw=53.6MiB/s (56.2MB/s), 9.92MiB/s-17.9MiB/s (10.4MB/s-18.8MB/s), io=54.0MiB (56.6MB), run=1004-1008msec 00:14:15.578 WRITE: bw=60.2MiB/s (63.2MB/s), 11.5MiB/s-19.3MiB/s (12.1MB/s-20.2MB/s), io=60.7MiB (63.7MB), run=1004-1008msec 00:14:15.578 00:14:15.578 Disk stats (read/write): 00:14:15.578 nvme0n1: ios=3122/3217, merge=0/0, ticks=22333/20425, in_queue=42758, util=86.87% 00:14:15.578 nvme0n2: ios=4115/4271, merge=0/0, ticks=45661/45915, in_queue=91576, util=97.76% 00:14:15.578 nvme0n3: ios=2391/2560, merge=0/0, ticks=18951/15366, in_queue=34317, util=97.28% 00:14:15.578 nvme0n4: ios=2611/2622, merge=0/0, ticks=34117/34704, in_queue=68821, util=96.11% 00:14:15.578 19:44:56 -- target/fio.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:15.578 [global] 00:14:15.578 thread=1 00:14:15.578 invalidate=1 00:14:15.578 rw=randwrite 00:14:15.578 time_based=1 00:14:15.578 runtime=1 00:14:15.578 ioengine=libaio 00:14:15.578 direct=1 00:14:15.578 bs=4096 00:14:15.578 iodepth=128 00:14:15.578 norandommap=0 00:14:15.578 numjobs=1 00:14:15.578 00:14:15.578 verify_dump=1 00:14:15.578 verify_backlog=512 00:14:15.578 verify_state_save=0 00:14:15.578 do_verify=1 00:14:15.578 verify=crc32c-intel 00:14:15.578 [job0] 00:14:15.578 filename=/dev/nvme0n1 00:14:15.578 [job1] 00:14:15.578 filename=/dev/nvme0n2 00:14:15.578 [job2] 00:14:15.578 filename=/dev/nvme0n3 00:14:15.578 [job3] 00:14:15.579 filename=/dev/nvme0n4 00:14:15.579 Could not set queue depth (nvme0n1) 00:14:15.579 Could not set queue depth (nvme0n2) 00:14:15.579 Could not set queue depth (nvme0n3) 00:14:15.579 Could not set queue depth (nvme0n4) 00:14:15.579 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.579 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.579 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.579 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:15.579 fio-3.35 00:14:15.579 Starting 4 threads 00:14:16.956 00:14:16.956 job0: (groupid=0, jobs=1): err= 0: pid=1690601: Wed Apr 24 19:44:58 2024 00:14:16.956 read: IOPS=4376, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1002msec) 00:14:16.956 slat (usec): min=3, max=12861, avg=125.62, stdev=716.52 00:14:16.956 clat (usec): min=1020, max=57154, avg=16284.79, stdev=10638.26 00:14:16.956 lat (usec): min=4238, max=57161, avg=16410.41, stdev=10691.76 00:14:16.956 clat percentiles (usec): 00:14:16.956 | 1.00th=[ 7308], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10945], 00:14:16.956 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12125], 60.00th=[12518], 00:14:16.956 | 70.00th=[13042], 80.00th=[16319], 90.00th=[36963], 95.00th=[45351], 00:14:16.956 | 99.00th=[53740], 99.50th=[55313], 99.90th=[56886], 99.95th=[56886], 00:14:16.956 | 99.99th=[57410] 00:14:16.956 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:14:16.956 slat (usec): min=4, max=7306, avg=87.10, stdev=406.89 00:14:16.956 clat (usec): min=7255, max=41893, avg=12004.53, stdev=4215.29 00:14:16.957 lat (usec): min=7272, max=41900, avg=12091.63, stdev=4222.81 00:14:16.957 clat percentiles (usec): 00:14:16.957 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10290], 00:14:16.957 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:14:16.957 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12780], 95.00th=[14746], 00:14:16.957 | 99.00th=[35914], 99.50th=[37487], 99.90th=[41681], 99.95th=[41681], 00:14:16.957 | 99.99th=[41681] 00:14:16.957 bw ( KiB/s): min=16384, max=20480, per=31.11%, avg=18432.00, stdev=2896.31, samples=2 00:14:16.957 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:14:16.957 lat (msec) : 2=0.01%, 10=10.78%, 20=79.05%, 50=9.36%, 100=0.80% 00:14:16.957 cpu : usr=7.29%, sys=8.39%, ctx=486, majf=0, minf=11 00:14:16.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:16.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.957 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.957 issued rwts: total=4385,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.957 job1: (groupid=0, jobs=1): err= 0: pid=1690602: Wed Apr 24 19:44:58 2024 00:14:16.957 read: IOPS=2630, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1012msec) 00:14:16.957 slat (usec): min=3, max=22100, avg=206.32, stdev=1417.85 00:14:16.957 clat (msec): min=5, max=118, avg=22.82, stdev=15.09 00:14:16.957 lat (msec): min=5, max=118, avg=23.03, stdev=15.23 00:14:16.957 clat percentiles (msec): 00:14:16.957 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 12], 00:14:16.957 | 30.00th=[ 15], 40.00th=[ 17], 50.00th=[ 20], 60.00th=[ 22], 00:14:16.957 | 70.00th=[ 23], 80.00th=[ 31], 90.00th=[ 39], 95.00th=[ 46], 00:14:16.957 | 99.00th=[ 99], 99.50th=[ 106], 99.90th=[ 120], 99.95th=[ 120], 00:14:16.957 | 99.99th=[ 120] 00:14:16.957 write: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec); 0 zone resets 00:14:16.957 slat (usec): min=4, max=15147, avg=136.99, stdev=707.31 00:14:16.957 clat (msec): min=4, max=118, avg=22.02, stdev=15.06 00:14:16.957 lat (msec): min=4, max=118, avg=22.16, stdev=15.11 00:14:16.957 clat percentiles (msec): 00:14:16.957 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 12], 00:14:16.957 | 30.00th=[ 16], 40.00th=[ 20], 50.00th=[ 21], 60.00th=[ 21], 00:14:16.957 | 70.00th=[ 23], 80.00th=[ 28], 90.00th=[ 33], 95.00th=[ 42], 00:14:16.957 | 99.00th=[ 105], 99.50th=[ 107], 99.90th=[ 115], 99.95th=[ 120], 00:14:16.957 | 99.99th=[ 120] 00:14:16.957 bw ( KiB/s): min=11392, max=12976, per=20.56%, avg=12184.00, stdev=1120.06, samples=2 00:14:16.957 iops : min= 2848, max= 3244, avg=3046.00, stdev=280.01, samples=2 00:14:16.957 lat (msec) : 10=7.17%, 20=42.55%, 50=46.67%, 100=2.55%, 250=1.06% 00:14:16.957 cpu : usr=3.17%, sys=7.32%, ctx=335, majf=0, minf=9 00:14:16.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:16.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.957 issued rwts: total=2662,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.957 job2: (groupid=0, jobs=1): err= 0: pid=1690611: Wed Apr 24 19:44:58 2024 00:14:16.957 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:14:16.957 slat (usec): min=2, max=16704, avg=105.23, stdev=755.52 00:14:16.957 clat (usec): min=4887, max=33790, avg=13819.13, stdev=2900.45 00:14:16.957 lat (usec): min=4892, max=33824, avg=13924.36, stdev=2969.12 00:14:16.957 clat percentiles (usec): 00:14:16.957 | 1.00th=[ 6718], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[12125], 00:14:16.957 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13304], 60.00th=[13698], 00:14:16.957 | 70.00th=[14484], 80.00th=[15926], 90.00th=[17433], 95.00th=[18220], 00:14:16.957 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:14:16.957 | 99.99th=[33817] 00:14:16.957 write: IOPS=4201, BW=16.4MiB/s (17.2MB/s)(16.6MiB/1009msec); 0 zone resets 00:14:16.957 slat (usec): min=3, max=36781, avg=127.25, stdev=1188.96 00:14:16.957 clat (usec): min=687, max=89230, avg=16437.14, stdev=12086.91 00:14:16.957 lat (usec): min=4249, max=89250, avg=16564.39, stdev=12191.82 00:14:16.957 clat percentiles (usec): 00:14:16.957 | 1.00th=[ 6259], 5.00th=[ 7963], 10.00th=[ 8586], 20.00th=[10028], 00:14:16.957 | 30.00th=[11469], 
40.00th=[12649], 50.00th=[13304], 60.00th=[13829], 00:14:16.957 | 70.00th=[14615], 80.00th=[16712], 90.00th=[28705], 95.00th=[45876], 00:14:16.957 | 99.00th=[71828], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:14:16.957 | 99.99th=[89654] 00:14:16.957 bw ( KiB/s): min=14400, max=18552, per=27.81%, avg=16476.00, stdev=2935.91, samples=2 00:14:16.957 iops : min= 3600, max= 4638, avg=4119.00, stdev=733.98, samples=2 00:14:16.957 lat (usec) : 750=0.01% 00:14:16.957 lat (msec) : 10=12.98%, 20=79.77%, 50=4.92%, 100=2.32% 00:14:16.957 cpu : usr=3.87%, sys=6.25%, ctx=226, majf=0, minf=15 00:14:16.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:16.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.957 issued rwts: total=4096,4239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.957 job3: (groupid=0, jobs=1): err= 0: pid=1690618: Wed Apr 24 19:44:58 2024 00:14:16.957 read: IOPS=2654, BW=10.4MiB/s (10.9MB/s)(10.5MiB/1012msec) 00:14:16.957 slat (usec): min=3, max=27006, avg=167.18, stdev=1273.79 00:14:16.957 clat (usec): min=7055, max=48143, avg=20928.58, stdev=5449.91 00:14:16.957 lat (usec): min=8221, max=48161, avg=21095.76, stdev=5537.15 00:14:16.957 clat percentiles (usec): 00:14:16.957 | 1.00th=[10683], 5.00th=[12911], 10.00th=[13042], 20.00th=[16450], 00:14:16.957 | 30.00th=[19006], 40.00th=[20055], 50.00th=[20841], 60.00th=[21627], 00:14:16.957 | 70.00th=[22152], 80.00th=[24511], 90.00th=[27657], 95.00th=[30802], 00:14:16.957 | 99.00th=[35390], 99.50th=[37487], 99.90th=[38536], 99.95th=[39060], 00:14:16.957 | 99.99th=[47973] 00:14:16.957 write: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec); 0 zone resets 00:14:16.957 slat (usec): min=4, max=15716, avg=171.67, stdev=874.02 00:14:16.957 clat (usec): min=4425, max=92793, avg=23460.38, stdev=14145.53 00:14:16.957 lat (usec): min=4435, max=92812, avg=23632.05, stdev=14236.61 00:14:16.957 clat percentiles (usec): 00:14:16.957 | 1.00th=[ 7242], 5.00th=[ 8717], 10.00th=[10945], 20.00th=[15664], 00:14:16.957 | 30.00th=[18482], 40.00th=[19792], 50.00th=[20055], 60.00th=[21365], 00:14:16.957 | 70.00th=[22676], 80.00th=[27132], 90.00th=[41681], 95.00th=[52167], 00:14:16.957 | 99.00th=[86508], 99.50th=[89654], 99.90th=[92799], 99.95th=[92799], 00:14:16.957 | 99.99th=[92799] 00:14:16.957 bw ( KiB/s): min=10096, max=14464, per=20.72%, avg=12280.00, stdev=3088.64, samples=2 00:14:16.957 iops : min= 2524, max= 3616, avg=3070.00, stdev=772.16, samples=2 00:14:16.957 lat (msec) : 10=4.91%, 20=37.39%, 50=55.00%, 100=2.69% 00:14:16.957 cpu : usr=3.46%, sys=5.84%, ctx=341, majf=0, minf=15 00:14:16.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:16.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:16.957 issued rwts: total=2686,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:16.957 00:14:16.957 Run status group 0 (all jobs): 00:14:16.957 READ: bw=53.4MiB/s (56.0MB/s), 10.3MiB/s-17.1MiB/s (10.8MB/s-17.9MB/s), io=54.0MiB (56.6MB), run=1002-1012msec 00:14:16.957 WRITE: bw=57.9MiB/s (60.7MB/s), 11.9MiB/s-18.0MiB/s (12.4MB/s-18.8MB/s), io=58.6MiB (61.4MB), run=1002-1012msec 00:14:16.957 00:14:16.957 Disk stats 
(read/write): 00:14:16.957 nvme0n1: ios=3920/4096, merge=0/0, ticks=15068/10148, in_queue=25216, util=87.17% 00:14:16.957 nvme0n2: ios=2072/2559, merge=0/0, ticks=45852/56519, in_queue=102371, util=97.76% 00:14:16.957 nvme0n3: ios=3271/3584, merge=0/0, ticks=32654/37665, in_queue=70319, util=98.33% 00:14:16.957 nvme0n4: ios=2213/2560, merge=0/0, ticks=43381/60773, in_queue=104154, util=98.73% 00:14:16.957 19:44:58 -- target/fio.sh@55 -- # sync 00:14:16.957 19:44:58 -- target/fio.sh@59 -- # fio_pid=1690769 00:14:16.957 19:44:58 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:16.957 19:44:58 -- target/fio.sh@61 -- # sleep 3 00:14:16.958 [global] 00:14:16.958 thread=1 00:14:16.958 invalidate=1 00:14:16.958 rw=read 00:14:16.958 time_based=1 00:14:16.958 runtime=10 00:14:16.958 ioengine=libaio 00:14:16.958 direct=1 00:14:16.958 bs=4096 00:14:16.958 iodepth=1 00:14:16.958 norandommap=1 00:14:16.958 numjobs=1 00:14:16.958 00:14:16.958 [job0] 00:14:16.958 filename=/dev/nvme0n1 00:14:16.958 [job1] 00:14:16.958 filename=/dev/nvme0n2 00:14:16.958 [job2] 00:14:16.958 filename=/dev/nvme0n3 00:14:16.958 [job3] 00:14:16.958 filename=/dev/nvme0n4 00:14:16.958 Could not set queue depth (nvme0n1) 00:14:16.958 Could not set queue depth (nvme0n2) 00:14:16.958 Could not set queue depth (nvme0n3) 00:14:16.958 Could not set queue depth (nvme0n4) 00:14:16.958 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.958 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.958 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.958 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:16.958 fio-3.35 00:14:16.958 Starting 4 threads 00:14:20.242 19:45:01 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:20.242 19:45:01 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:20.242 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=4702208, buflen=4096 00:14:20.242 fio: pid=1690951, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:20.242 19:45:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:20.242 19:45:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:20.242 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1466368, buflen=4096 00:14:20.242 fio: pid=1690950, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:20.500 19:45:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:20.500 19:45:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:20.500 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=12836864, buflen=4096 00:14:20.500 fio: pid=1690948, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:20.758 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=38559744, buflen=4096 00:14:20.758 fio: pid=1690949, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:20.758 
19:45:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:20.758 19:45:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:20.758 00:14:20.758 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1690948: Wed Apr 24 19:45:02 2024 00:14:20.758 read: IOPS=907, BW=3628KiB/s (3715kB/s)(12.2MiB/3455msec) 00:14:20.758 slat (usec): min=5, max=19825, avg=23.18, stdev=431.10 00:14:20.758 clat (usec): min=317, max=41670, avg=1068.77, stdev=5105.33 00:14:20.758 lat (usec): min=326, max=54962, avg=1091.95, stdev=5157.72 00:14:20.758 clat percentiles (usec): 00:14:20.758 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 359], 00:14:20.758 | 30.00th=[ 371], 40.00th=[ 383], 50.00th=[ 412], 60.00th=[ 429], 00:14:20.758 | 70.00th=[ 449], 80.00th=[ 461], 90.00th=[ 498], 95.00th=[ 519], 00:14:20.758 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:14:20.758 | 99.99th=[41681] 00:14:20.758 bw ( KiB/s): min= 96, max= 9088, per=26.42%, avg=4033.33, stdev=3367.79, samples=6 00:14:20.758 iops : min= 24, max= 2272, avg=1008.33, stdev=841.95, samples=6 00:14:20.758 lat (usec) : 500=90.49%, 750=7.72%, 1000=0.03% 00:14:20.758 lat (msec) : 2=0.03%, 4=0.06%, 50=1.63% 00:14:20.758 cpu : usr=0.72%, sys=1.82%, ctx=3137, majf=0, minf=1 00:14:20.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.758 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.758 issued rwts: total=3135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.758 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1690949: Wed Apr 24 19:45:02 2024 00:14:20.758 read: IOPS=2556, BW=9.98MiB/s (10.5MB/s)(36.8MiB/3683msec) 00:14:20.758 slat (usec): min=5, max=26394, avg=19.63, stdev=407.19 00:14:20.758 clat (usec): min=308, max=3660, avg=366.49, stdev=68.84 00:14:20.758 lat (usec): min=316, max=26811, avg=386.12, stdev=415.45 00:14:20.758 clat percentiles (usec): 00:14:20.758 | 1.00th=[ 322], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 334], 00:14:20.758 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 355], 00:14:20.758 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 441], 95.00th=[ 465], 00:14:20.758 | 99.00th=[ 510], 99.50th=[ 545], 99.90th=[ 979], 99.95th=[ 1598], 00:14:20.758 | 99.99th=[ 3654] 00:14:20.758 bw ( KiB/s): min= 9304, max=11400, per=67.47%, avg=10298.57, stdev=744.93, samples=7 00:14:20.758 iops : min= 2326, max= 2850, avg=2574.57, stdev=186.33, samples=7 00:14:20.758 lat (usec) : 500=98.63%, 750=1.19%, 1000=0.07% 00:14:20.758 lat (msec) : 2=0.06%, 4=0.03% 00:14:20.758 cpu : usr=1.55%, sys=4.05%, ctx=9421, majf=0, minf=1 00:14:20.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.758 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.758 issued rwts: total=9415,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.758 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1690950: Wed Apr 24 19:45:02 2024 00:14:20.758 
read: IOPS=112, BW=448KiB/s (459kB/s)(1432KiB/3195msec) 00:14:20.758 slat (nsec): min=7090, max=44889, avg=11921.34, stdev=6378.35 00:14:20.758 clat (usec): min=388, max=41410, avg=8845.56, stdev=16432.13 00:14:20.758 lat (usec): min=396, max=41443, avg=8857.48, stdev=16435.24 00:14:20.758 clat percentiles (usec): 00:14:20.758 | 1.00th=[ 429], 5.00th=[ 437], 10.00th=[ 441], 20.00th=[ 445], 00:14:20.758 | 30.00th=[ 453], 40.00th=[ 457], 50.00th=[ 469], 60.00th=[ 482], 00:14:20.758 | 70.00th=[ 510], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:20.758 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:20.758 | 99.99th=[41157] 00:14:20.758 bw ( KiB/s): min= 96, max= 1816, per=3.08%, avg=470.67, stdev=685.40, samples=6 00:14:20.758 iops : min= 24, max= 454, avg=117.67, stdev=171.35, samples=6 00:14:20.758 lat (usec) : 500=66.30%, 750=12.81% 00:14:20.758 lat (msec) : 50=20.61% 00:14:20.758 cpu : usr=0.13%, sys=0.13%, ctx=359, majf=0, minf=1 00:14:20.758 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.758 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.758 issued rwts: total=359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.758 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.758 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1690951: Wed Apr 24 19:45:02 2024 00:14:20.758 read: IOPS=393, BW=1572KiB/s (1610kB/s)(4592KiB/2921msec) 00:14:20.758 slat (nsec): min=6254, max=42019, avg=11725.27, stdev=5433.14 00:14:20.758 clat (usec): min=337, max=41495, avg=2521.17, stdev=8884.59 00:14:20.759 lat (usec): min=345, max=41528, avg=2532.89, stdev=8886.03 00:14:20.759 clat percentiles (usec): 00:14:20.759 | 1.00th=[ 347], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 424], 00:14:20.759 | 30.00th=[ 449], 40.00th=[ 465], 50.00th=[ 478], 60.00th=[ 494], 00:14:20.759 | 70.00th=[ 515], 80.00th=[ 553], 90.00th=[ 578], 95.00th=[40633], 00:14:20.759 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:14:20.759 | 99.99th=[41681] 00:14:20.759 bw ( KiB/s): min= 96, max= 4096, per=9.55%, avg=1457.60, stdev=1620.53, samples=5 00:14:20.759 iops : min= 24, max= 1024, avg=364.40, stdev=405.13, samples=5 00:14:20.759 lat (usec) : 500=63.88%, 750=30.90% 00:14:20.759 lat (msec) : 2=0.09%, 50=5.05% 00:14:20.759 cpu : usr=0.41%, sys=0.48%, ctx=1149, majf=0, minf=1 00:14:20.759 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:20.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.759 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:20.759 issued rwts: total=1149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:20.759 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:20.759 00:14:20.759 Run status group 0 (all jobs): 00:14:20.759 READ: bw=14.9MiB/s (15.6MB/s), 448KiB/s-9.98MiB/s (459kB/s-10.5MB/s), io=54.9MiB (57.6MB), run=2921-3683msec 00:14:20.759 00:14:20.759 Disk stats (read/write): 00:14:20.759 nvme0n1: ios=3132/0, merge=0/0, ticks=3221/0, in_queue=3221, util=95.08% 00:14:20.759 nvme0n2: ios=9250/0, merge=0/0, ticks=3196/0, in_queue=3196, util=94.29% 00:14:20.759 nvme0n3: ios=356/0, merge=0/0, ticks=3083/0, in_queue=3083, util=96.79% 00:14:20.759 nvme0n4: ios=1139/0, merge=0/0, ticks=2788/0, in_queue=2788, util=96.71% 00:14:21.016 19:45:02 -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:21.016 19:45:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:21.274 19:45:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:21.274 19:45:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:21.531 19:45:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:21.531 19:45:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:21.788 19:45:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:21.788 19:45:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:22.047 19:45:03 -- target/fio.sh@69 -- # fio_status=0 00:14:22.047 19:45:03 -- target/fio.sh@70 -- # wait 1690769 00:14:22.047 19:45:03 -- target/fio.sh@70 -- # fio_status=4 00:14:22.047 19:45:03 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.304 19:45:03 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:22.304 19:45:03 -- common/autotest_common.sh@1205 -- # local i=0 00:14:22.304 19:45:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:22.304 19:45:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.304 19:45:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:22.304 19:45:03 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.305 19:45:03 -- common/autotest_common.sh@1217 -- # return 0 00:14:22.305 19:45:03 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:22.305 19:45:03 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:22.305 nvmf hotplug test: fio failed as expected 00:14:22.305 19:45:03 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.563 19:45:03 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:22.563 19:45:03 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:22.563 19:45:03 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:22.563 19:45:03 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:22.563 19:45:03 -- target/fio.sh@91 -- # nvmftestfini 00:14:22.563 19:45:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:22.563 19:45:03 -- nvmf/common.sh@117 -- # sync 00:14:22.563 19:45:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:22.563 19:45:03 -- nvmf/common.sh@120 -- # set +e 00:14:22.563 19:45:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:22.563 19:45:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:22.563 rmmod nvme_tcp 00:14:22.563 rmmod nvme_fabrics 00:14:22.563 rmmod nvme_keyring 00:14:22.563 19:45:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:22.563 19:45:03 -- nvmf/common.sh@124 -- # set -e 00:14:22.563 19:45:03 -- nvmf/common.sh@125 -- # return 0 00:14:22.563 19:45:03 -- nvmf/common.sh@478 -- # '[' -n 1688832 ']' 00:14:22.563 19:45:03 -- nvmf/common.sh@479 -- # killprocess 1688832 00:14:22.563 19:45:03 -- common/autotest_common.sh@936 -- # 
'[' -z 1688832 ']' 00:14:22.563 19:45:03 -- common/autotest_common.sh@940 -- # kill -0 1688832 00:14:22.563 19:45:03 -- common/autotest_common.sh@941 -- # uname 00:14:22.563 19:45:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:22.563 19:45:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1688832 00:14:22.563 19:45:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:22.563 19:45:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:22.563 19:45:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1688832' 00:14:22.563 killing process with pid 1688832 00:14:22.563 19:45:04 -- common/autotest_common.sh@955 -- # kill 1688832 00:14:22.563 19:45:04 -- common/autotest_common.sh@960 -- # wait 1688832 00:14:22.823 19:45:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:22.823 19:45:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:22.823 19:45:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:22.823 19:45:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.823 19:45:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:22.823 19:45:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.823 19:45:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.823 19:45:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.360 19:45:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:25.360 00:14:25.360 real 0m23.391s 00:14:25.360 user 1m21.938s 00:14:25.360 sys 0m6.248s 00:14:25.360 19:45:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:25.360 19:45:06 -- common/autotest_common.sh@10 -- # set +x 00:14:25.360 ************************************ 00:14:25.360 END TEST nvmf_fio_target 00:14:25.360 ************************************ 00:14:25.360 19:45:06 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:25.360 19:45:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:25.360 19:45:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:25.360 19:45:06 -- common/autotest_common.sh@10 -- # set +x 00:14:25.360 ************************************ 00:14:25.360 START TEST nvmf_bdevio 00:14:25.360 ************************************ 00:14:25.360 19:45:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:25.360 * Looking for test storage... 
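The teardown just traced is the standard pair nvmftestfini plus killprocess from autotest_common.sh, repeated after every target test in this log. Condensed from the xtrace above, with the modprobe retry loop simplified:

  # unload the kernel initiator modules; removing nvme-tcp also drags out
  # nvme_fabrics and nvme_keyring (nvmf/common.sh@121-123)
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break
  done
  modprobe -v -r nvme-fabrics
  # stop the target app: check the pid is alive and is an SPDK reactor,
  # then kill it and reap the exit status (autotest_common.sh@936-960)
  kill -0 "$nvmfpid"
  ps --no-headers -o comm= "$nvmfpid"   # prints reactor_0 here
  echo "killing process with pid $nvmfpid"
  kill "$nvmfpid"
  wait "$nvmfpid"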
00:14:25.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:25.361 19:45:06 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.361 19:45:06 -- nvmf/common.sh@7 -- # uname -s 00:14:25.361 19:45:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.361 19:45:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.361 19:45:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.361 19:45:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.361 19:45:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.361 19:45:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.361 19:45:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.361 19:45:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.361 19:45:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.361 19:45:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.361 19:45:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.361 19:45:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.361 19:45:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.361 19:45:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.361 19:45:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.361 19:45:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.361 19:45:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.361 19:45:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.361 19:45:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.361 19:45:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.361 19:45:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.361 19:45:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.361 19:45:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.361 19:45:06 -- paths/export.sh@5 -- # export PATH 00:14:25.361 19:45:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.361 19:45:06 -- nvmf/common.sh@47 -- # : 0 00:14:25.361 19:45:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:25.361 19:45:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:25.361 19:45:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.361 19:45:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.361 19:45:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.361 19:45:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:25.361 19:45:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:25.361 19:45:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:25.361 19:45:06 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:25.361 19:45:06 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:25.361 19:45:06 -- target/bdevio.sh@14 -- # nvmftestinit 00:14:25.361 19:45:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:25.361 19:45:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.361 19:45:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:25.361 19:45:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:25.361 19:45:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:25.361 19:45:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.361 19:45:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.361 19:45:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.361 19:45:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:25.361 19:45:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:25.361 19:45:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:25.361 19:45:06 -- common/autotest_common.sh@10 -- # set +x 00:14:27.260 19:45:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:27.260 19:45:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:27.260 19:45:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:27.260 19:45:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:27.260 19:45:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:27.260 19:45:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:27.260 19:45:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:27.261 19:45:08 -- nvmf/common.sh@295 -- # net_devs=() 00:14:27.261 19:45:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:27.261 19:45:08 -- nvmf/common.sh@296 
-- # e810=() 00:14:27.261 19:45:08 -- nvmf/common.sh@296 -- # local -ga e810 00:14:27.261 19:45:08 -- nvmf/common.sh@297 -- # x722=() 00:14:27.261 19:45:08 -- nvmf/common.sh@297 -- # local -ga x722 00:14:27.261 19:45:08 -- nvmf/common.sh@298 -- # mlx=() 00:14:27.261 19:45:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:27.261 19:45:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.261 19:45:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.261 19:45:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.261 19:45:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.261 19:45:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.261 19:45:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.261 19:45:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.261 19:45:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.261 19:45:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.261 19:45:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.261 19:45:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.261 19:45:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:27.261 19:45:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:27.261 19:45:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:27.261 19:45:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.261 19:45:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:27.261 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:27.261 19:45:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.261 19:45:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:27.261 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:27.261 19:45:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:27.261 19:45:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.261 19:45:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.261 19:45:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:27.261 19:45:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.261 19:45:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:27.261 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:14:27.261 19:45:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.261 19:45:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.261 19:45:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.261 19:45:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:27.261 19:45:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.261 19:45:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:27.261 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:27.261 19:45:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.261 19:45:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:27.261 19:45:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:27.261 19:45:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:27.261 19:45:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.261 19:45:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.261 19:45:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.261 19:45:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:27.261 19:45:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.261 19:45:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.261 19:45:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:27.261 19:45:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.261 19:45:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.261 19:45:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:27.261 19:45:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:27.261 19:45:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.261 19:45:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.261 19:45:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.261 19:45:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.261 19:45:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:27.261 19:45:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.261 19:45:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.261 19:45:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.261 19:45:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:27.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:14:27.261 00:14:27.261 --- 10.0.0.2 ping statistics --- 00:14:27.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.261 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:14:27.261 19:45:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:27.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:14:27.261 00:14:27.261 --- 10.0.0.1 ping statistics --- 00:14:27.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.261 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:14:27.261 19:45:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.261 19:45:08 -- nvmf/common.sh@411 -- # return 0 00:14:27.261 19:45:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:27.261 19:45:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.261 19:45:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:27.261 19:45:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.261 19:45:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:27.261 19:45:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:27.261 19:45:08 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:27.261 19:45:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:27.261 19:45:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:27.261 19:45:08 -- common/autotest_common.sh@10 -- # set +x 00:14:27.261 19:45:08 -- nvmf/common.sh@470 -- # nvmfpid=1694078 00:14:27.261 19:45:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:27.261 19:45:08 -- nvmf/common.sh@471 -- # waitforlisten 1694078 00:14:27.261 19:45:08 -- common/autotest_common.sh@817 -- # '[' -z 1694078 ']' 00:14:27.261 19:45:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.261 19:45:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:27.261 19:45:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.261 19:45:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:27.261 19:45:08 -- common/autotest_common.sh@10 -- # set +x 00:14:27.261 [2024-04-24 19:45:08.598252] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:14:27.261 [2024-04-24 19:45:08.598332] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.261 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.261 [2024-04-24 19:45:08.673423] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.520 [2024-04-24 19:45:08.797562] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.520 [2024-04-24 19:45:08.797623] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.520 [2024-04-24 19:45:08.797650] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.520 [2024-04-24 19:45:08.797665] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.520 [2024-04-24 19:45:08.797677] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
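Between the PCI scan and the app start above, nvmf_tcp_init splits the two ice ports into roles: the target port (cvl_0_0) moves into a private network namespace and nvmf_tgt is launched inside it, which is why NVMF_APP is prefixed with ip netns exec. The plumbing, condensed from the commands in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # ping both directions, then start the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78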
00:14:27.520 [2024-04-24 19:45:08.797764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:27.520 [2024-04-24 19:45:08.797822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:27.520 [2024-04-24 19:45:08.797859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:27.520 [2024-04-24 19:45:08.797863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:27.520 19:45:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:27.520 19:45:08 -- common/autotest_common.sh@850 -- # return 0 00:14:27.520 19:45:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:27.520 19:45:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:27.520 19:45:08 -- common/autotest_common.sh@10 -- # set +x 00:14:27.520 19:45:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.520 19:45:08 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:27.520 19:45:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:27.520 19:45:08 -- common/autotest_common.sh@10 -- # set +x 00:14:27.520 [2024-04-24 19:45:08.959293] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.520 19:45:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:27.520 19:45:08 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:27.520 19:45:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:27.520 19:45:08 -- common/autotest_common.sh@10 -- # set +x 00:14:27.520 Malloc0 00:14:27.520 19:45:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:27.520 19:45:08 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:27.520 19:45:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:27.520 19:45:08 -- common/autotest_common.sh@10 -- # set +x 00:14:27.520 19:45:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:27.520 19:45:08 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:27.520 19:45:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:27.520 19:45:08 -- common/autotest_common.sh@10 -- # set +x 00:14:27.520 19:45:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:27.520 19:45:09 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.520 19:45:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:27.520 19:45:09 -- common/autotest_common.sh@10 -- # set +x 00:14:27.520 [2024-04-24 19:45:09.012010] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.520 19:45:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:27.520 19:45:09 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:27.520 19:45:09 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:27.520 19:45:09 -- nvmf/common.sh@521 -- # config=() 00:14:27.520 19:45:09 -- nvmf/common.sh@521 -- # local subsystem config 00:14:27.520 19:45:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:27.520 19:45:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:27.520 { 00:14:27.520 "params": { 00:14:27.520 "name": "Nvme$subsystem", 00:14:27.520 "trtype": "$TEST_TRANSPORT", 00:14:27.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:27.520 "adrfam": "ipv4", 00:14:27.520 "trsvcid": 
"$NVMF_PORT", 00:14:27.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:27.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:27.520 "hdgst": ${hdgst:-false}, 00:14:27.520 "ddgst": ${ddgst:-false} 00:14:27.520 }, 00:14:27.520 "method": "bdev_nvme_attach_controller" 00:14:27.520 } 00:14:27.520 EOF 00:14:27.520 )") 00:14:27.520 19:45:09 -- nvmf/common.sh@543 -- # cat 00:14:27.520 19:45:09 -- nvmf/common.sh@545 -- # jq . 00:14:27.520 19:45:09 -- nvmf/common.sh@546 -- # IFS=, 00:14:27.520 19:45:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:27.520 "params": { 00:14:27.520 "name": "Nvme1", 00:14:27.520 "trtype": "tcp", 00:14:27.520 "traddr": "10.0.0.2", 00:14:27.520 "adrfam": "ipv4", 00:14:27.520 "trsvcid": "4420", 00:14:27.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:27.520 "hdgst": false, 00:14:27.520 "ddgst": false 00:14:27.520 }, 00:14:27.520 "method": "bdev_nvme_attach_controller" 00:14:27.520 }' 00:14:27.778 [2024-04-24 19:45:09.057717] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:14:27.778 [2024-04-24 19:45:09.057794] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1694191 ] 00:14:27.778 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.778 [2024-04-24 19:45:09.121796] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:27.778 [2024-04-24 19:45:09.234219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.778 [2024-04-24 19:45:09.234267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.778 [2024-04-24 19:45:09.234271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.035 I/O targets: 00:14:28.035 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:28.035 00:14:28.035 00:14:28.036 CUnit - A unit testing framework for C - Version 2.1-3 00:14:28.036 http://cunit.sourceforge.net/ 00:14:28.036 00:14:28.036 00:14:28.036 Suite: bdevio tests on: Nvme1n1 00:14:28.036 Test: blockdev write read block ...passed 00:14:28.036 Test: blockdev write zeroes read block ...passed 00:14:28.036 Test: blockdev write zeroes read no split ...passed 00:14:28.293 Test: blockdev write zeroes read split ...passed 00:14:28.293 Test: blockdev write zeroes read split partial ...passed 00:14:28.293 Test: blockdev reset ...[2024-04-24 19:45:09.625598] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:28.293 [2024-04-24 19:45:09.625720] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef2f70 (9): Bad file descriptor 00:14:28.293 [2024-04-24 19:45:09.643999] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:28.293 passed 00:14:28.293 Test: blockdev write read 8 blocks ...passed 00:14:28.293 Test: blockdev write read size > 128k ...passed 00:14:28.293 Test: blockdev write read invalid size ...passed 00:14:28.293 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:28.293 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:28.293 Test: blockdev write read max offset ...passed 00:14:28.293 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:28.293 Test: blockdev writev readv 8 blocks ...passed 00:14:28.293 Test: blockdev writev readv 30 x 1block ...passed 00:14:28.551 Test: blockdev writev readv block ...passed 00:14:28.551 Test: blockdev writev readv size > 128k ...passed 00:14:28.551 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:28.551 Test: blockdev comparev and writev ...[2024-04-24 19:45:09.821714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.551 [2024-04-24 19:45:09.821752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:28.552 [2024-04-24 19:45:09.821776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.552 [2024-04-24 19:45:09.821793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:28.552 [2024-04-24 19:45:09.822203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.552 [2024-04-24 19:45:09.822227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:28.552 [2024-04-24 19:45:09.822250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.552 [2024-04-24 19:45:09.822275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:28.552 [2024-04-24 19:45:09.822689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.552 [2024-04-24 19:45:09.822714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:28.552 [2024-04-24 19:45:09.822735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.552 [2024-04-24 19:45:09.822751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:28.552 [2024-04-24 19:45:09.823165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.552 [2024-04-24 19:45:09.823199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:28.552 [2024-04-24 19:45:09.823220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:28.552 [2024-04-24 19:45:09.823235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:28.552 passed 00:14:28.552 Test: blockdev nvme passthru rw ...passed 00:14:28.552 Test: blockdev nvme passthru vendor specific ...[2024-04-24 19:45:09.907001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:28.552 [2024-04-24 19:45:09.907028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:28.552 [2024-04-24 19:45:09.907241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:28.552 [2024-04-24 19:45:09.907265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:28.552 [2024-04-24 19:45:09.907471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:28.552 [2024-04-24 19:45:09.907494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:28.552 [2024-04-24 19:45:09.907701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:28.552 [2024-04-24 19:45:09.907725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:28.552 passed 00:14:28.552 Test: blockdev nvme admin passthru ...passed 00:14:28.552 Test: blockdev copy ...passed 00:14:28.552 00:14:28.552 Run Summary: Type Total Ran Passed Failed Inactive 00:14:28.552 suites 1 1 n/a 0 0 00:14:28.552 tests 23 23 23 0 0 00:14:28.552 asserts 152 152 152 0 n/a 00:14:28.552 00:14:28.552 Elapsed time = 1.108 seconds 00:14:28.812 19:45:10 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.812 19:45:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:28.812 19:45:10 -- common/autotest_common.sh@10 -- # set +x 00:14:28.812 19:45:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:28.812 19:45:10 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:28.812 19:45:10 -- target/bdevio.sh@30 -- # nvmftestfini 00:14:28.812 19:45:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:28.812 19:45:10 -- nvmf/common.sh@117 -- # sync 00:14:28.812 19:45:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:28.812 19:45:10 -- nvmf/common.sh@120 -- # set +e 00:14:28.812 19:45:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:28.812 19:45:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:28.812 rmmod nvme_tcp 00:14:28.812 rmmod nvme_fabrics 00:14:28.812 rmmod nvme_keyring 00:14:28.812 19:45:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:28.812 19:45:10 -- nvmf/common.sh@124 -- # set -e 00:14:28.812 19:45:10 -- nvmf/common.sh@125 -- # return 0 00:14:28.812 19:45:10 -- nvmf/common.sh@478 -- # '[' -n 1694078 ']' 00:14:28.812 19:45:10 -- nvmf/common.sh@479 -- # killprocess 1694078 00:14:28.812 19:45:10 -- common/autotest_common.sh@936 -- # '[' -z 1694078 ']' 00:14:28.812 19:45:10 -- common/autotest_common.sh@940 -- # kill -0 1694078 00:14:28.812 19:45:10 -- common/autotest_common.sh@941 -- # uname 00:14:28.812 19:45:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:28.812 19:45:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1694078 00:14:28.812 19:45:10 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:14:28.813 19:45:10 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:14:28.813 19:45:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1694078' 00:14:28.813 killing process with pid 1694078 00:14:28.813 19:45:10 -- common/autotest_common.sh@955 -- # kill 1694078 00:14:28.813 19:45:10 -- common/autotest_common.sh@960 -- # wait 1694078 00:14:29.380 19:45:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:29.380 19:45:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:29.380 19:45:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:29.380 19:45:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:29.380 19:45:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:29.380 19:45:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.380 19:45:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.380 19:45:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.287 19:45:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:31.287 00:14:31.287 real 0m6.202s 00:14:31.287 user 0m9.683s 00:14:31.287 sys 0m2.090s 00:14:31.287 19:45:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:31.287 19:45:12 -- common/autotest_common.sh@10 -- # set +x 00:14:31.287 ************************************ 00:14:31.287 END TEST nvmf_bdevio 00:14:31.287 ************************************ 00:14:31.287 19:45:12 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:14:31.287 19:45:12 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:31.287 19:45:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:31.287 19:45:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.287 19:45:12 -- common/autotest_common.sh@10 -- # set +x 00:14:31.287 ************************************ 00:14:31.287 START TEST nvmf_bdevio_no_huge 00:14:31.287 ************************************ 00:14:31.287 19:45:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:31.546 * Looking for test storage... 
00:14:31.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.546 19:45:12 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.546 19:45:12 -- nvmf/common.sh@7 -- # uname -s 00:14:31.546 19:45:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.546 19:45:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.546 19:45:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.546 19:45:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.546 19:45:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.546 19:45:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.546 19:45:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.546 19:45:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.546 19:45:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.546 19:45:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.546 19:45:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.546 19:45:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.546 19:45:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.546 19:45:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.546 19:45:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.546 19:45:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.546 19:45:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.546 19:45:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.546 19:45:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.546 19:45:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.546 19:45:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.546 19:45:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.546 19:45:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.546 19:45:12 -- paths/export.sh@5 -- # export PATH 00:14:31.546 19:45:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.546 19:45:12 -- nvmf/common.sh@47 -- # : 0 00:14:31.546 19:45:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.546 19:45:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.546 19:45:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.546 19:45:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.546 19:45:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.546 19:45:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.546 19:45:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.546 19:45:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.546 19:45:12 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.546 19:45:12 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.546 19:45:12 -- target/bdevio.sh@14 -- # nvmftestinit 00:14:31.546 19:45:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:31.546 19:45:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.546 19:45:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:31.546 19:45:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:31.546 19:45:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:31.546 19:45:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.546 19:45:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.546 19:45:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.546 19:45:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:31.546 19:45:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:31.546 19:45:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:31.546 19:45:12 -- common/autotest_common.sh@10 -- # set +x 00:14:33.449 19:45:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:33.449 19:45:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:33.449 19:45:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:33.449 19:45:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:33.449 19:45:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:33.449 19:45:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:33.449 19:45:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:33.449 19:45:14 -- nvmf/common.sh@295 -- # net_devs=() 00:14:33.449 19:45:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:33.449 19:45:14 -- nvmf/common.sh@296 
-- # e810=() 00:14:33.449 19:45:14 -- nvmf/common.sh@296 -- # local -ga e810 00:14:33.449 19:45:14 -- nvmf/common.sh@297 -- # x722=() 00:14:33.449 19:45:14 -- nvmf/common.sh@297 -- # local -ga x722 00:14:33.449 19:45:14 -- nvmf/common.sh@298 -- # mlx=() 00:14:33.449 19:45:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:33.449 19:45:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.449 19:45:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.449 19:45:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.449 19:45:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.449 19:45:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.449 19:45:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.449 19:45:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.449 19:45:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.449 19:45:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.449 19:45:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.449 19:45:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.449 19:45:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:33.449 19:45:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:33.449 19:45:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:33.449 19:45:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:33.449 19:45:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:33.449 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:33.449 19:45:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:33.449 19:45:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:33.449 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:33.449 19:45:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:33.449 19:45:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:33.449 19:45:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.449 19:45:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:33.449 19:45:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.449 19:45:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:33.449 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:14:33.449 19:45:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.449 19:45:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:33.449 19:45:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.449 19:45:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:33.449 19:45:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.449 19:45:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:33.449 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:33.449 19:45:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.449 19:45:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:33.449 19:45:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:33.449 19:45:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:33.449 19:45:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.449 19:45:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.449 19:45:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:33.449 19:45:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:33.449 19:45:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:33.449 19:45:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:33.449 19:45:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:33.449 19:45:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:33.449 19:45:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.449 19:45:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:33.449 19:45:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:33.449 19:45:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:33.449 19:45:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:33.449 19:45:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:33.449 19:45:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:33.449 19:45:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:33.449 19:45:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:33.449 19:45:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:33.449 19:45:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:33.449 19:45:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:33.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:14:33.449 00:14:33.449 --- 10.0.0.2 ping statistics --- 00:14:33.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.449 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:14:33.449 19:45:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:33.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:33.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:14:33.449 00:14:33.449 --- 10.0.0.1 ping statistics --- 00:14:33.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.449 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:14:33.449 19:45:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.449 19:45:14 -- nvmf/common.sh@411 -- # return 0 00:14:33.449 19:45:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:33.449 19:45:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.449 19:45:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:33.449 19:45:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.449 19:45:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:33.449 19:45:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:33.449 19:45:14 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:33.449 19:45:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:33.449 19:45:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:33.449 19:45:14 -- common/autotest_common.sh@10 -- # set +x 00:14:33.449 19:45:14 -- nvmf/common.sh@470 -- # nvmfpid=1696306 00:14:33.449 19:45:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:33.449 19:45:14 -- nvmf/common.sh@471 -- # waitforlisten 1696306 00:14:33.449 19:45:14 -- common/autotest_common.sh@817 -- # '[' -z 1696306 ']' 00:14:33.449 19:45:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.449 19:45:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:33.449 19:45:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.449 19:45:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:33.449 19:45:14 -- common/autotest_common.sh@10 -- # set +x 00:14:33.708 [2024-04-24 19:45:15.000355] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:14:33.708 [2024-04-24 19:45:15.000436] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:33.708 [2024-04-24 19:45:15.073338] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.708 [2024-04-24 19:45:15.179636] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.708 [2024-04-24 19:45:15.179705] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.708 [2024-04-24 19:45:15.179722] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.708 [2024-04-24 19:45:15.179734] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.708 [2024-04-24 19:45:15.179743] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
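This second bdevio pass differs from the first only in how the processes get memory: as the command lines in the trace show, both the target and (further down) the bdevio app run with --no-huge -s 1024, so DPDK falls back to --iova-mode=va with a 1024 MB cap instead of hugepages:

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
  test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024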
00:14:33.708 [2024-04-24 19:45:15.179830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:33.708 [2024-04-24 19:45:15.179904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:33.708 [2024-04-24 19:45:15.180026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:33.708 [2024-04-24 19:45:15.180029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.967 19:45:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:33.967 19:45:15 -- common/autotest_common.sh@850 -- # return 0 00:14:33.967 19:45:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:33.967 19:45:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:33.967 19:45:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.967 19:45:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.967 19:45:15 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:33.967 19:45:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:33.967 19:45:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.968 [2024-04-24 19:45:15.308401] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.968 19:45:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:33.968 19:45:15 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:33.968 19:45:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:33.968 19:45:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.968 Malloc0 00:14:33.968 19:45:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:33.968 19:45:15 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:33.968 19:45:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:33.968 19:45:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.968 19:45:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:33.968 19:45:15 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:33.968 19:45:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:33.968 19:45:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.968 19:45:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:33.968 19:45:15 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.968 19:45:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:33.968 19:45:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.968 [2024-04-24 19:45:15.346600] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.968 19:45:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:33.968 19:45:15 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:33.968 19:45:15 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:33.968 19:45:15 -- nvmf/common.sh@521 -- # config=() 00:14:33.968 19:45:15 -- nvmf/common.sh@521 -- # local subsystem config 00:14:33.968 19:45:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:33.968 19:45:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:33.968 { 00:14:33.968 "params": { 00:14:33.968 "name": "Nvme$subsystem", 00:14:33.968 "trtype": "$TEST_TRANSPORT", 00:14:33.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:33.968 "adrfam": "ipv4", 00:14:33.968 
"trsvcid": "$NVMF_PORT", 00:14:33.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:33.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:33.968 "hdgst": ${hdgst:-false}, 00:14:33.968 "ddgst": ${ddgst:-false} 00:14:33.968 }, 00:14:33.968 "method": "bdev_nvme_attach_controller" 00:14:33.968 } 00:14:33.968 EOF 00:14:33.968 )") 00:14:33.968 19:45:15 -- nvmf/common.sh@543 -- # cat 00:14:33.968 19:45:15 -- nvmf/common.sh@545 -- # jq . 00:14:33.968 19:45:15 -- nvmf/common.sh@546 -- # IFS=, 00:14:33.968 19:45:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:33.968 "params": { 00:14:33.968 "name": "Nvme1", 00:14:33.968 "trtype": "tcp", 00:14:33.968 "traddr": "10.0.0.2", 00:14:33.968 "adrfam": "ipv4", 00:14:33.968 "trsvcid": "4420", 00:14:33.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:33.968 "hdgst": false, 00:14:33.968 "ddgst": false 00:14:33.968 }, 00:14:33.968 "method": "bdev_nvme_attach_controller" 00:14:33.968 }' 00:14:33.968 [2024-04-24 19:45:15.393877] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:14:33.968 [2024-04-24 19:45:15.393990] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1696446 ] 00:14:33.968 [2024-04-24 19:45:15.458512] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:34.227 [2024-04-24 19:45:15.574218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.227 [2024-04-24 19:45:15.574270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.227 [2024-04-24 19:45:15.574273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.227 I/O targets: 00:14:34.227 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:34.227 00:14:34.227 00:14:34.227 CUnit - A unit testing framework for C - Version 2.1-3 00:14:34.227 http://cunit.sourceforge.net/ 00:14:34.227 00:14:34.227 00:14:34.227 Suite: bdevio tests on: Nvme1n1 00:14:34.484 Test: blockdev write read block ...passed 00:14:34.484 Test: blockdev write zeroes read block ...passed 00:14:34.484 Test: blockdev write zeroes read no split ...passed 00:14:34.484 Test: blockdev write zeroes read split ...passed 00:14:34.484 Test: blockdev write zeroes read split partial ...passed 00:14:34.484 Test: blockdev reset ...[2024-04-24 19:45:15.949125] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:34.484 [2024-04-24 19:45:15.949239] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ae5c0 (9): Bad file descriptor 00:14:34.484 [2024-04-24 19:45:15.970920] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:34.484 passed 00:14:34.484 Test: blockdev write read 8 blocks ...passed 00:14:34.484 Test: blockdev write read size > 128k ...passed 00:14:34.484 Test: blockdev write read invalid size ...passed 00:14:34.742 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:34.742 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:34.742 Test: blockdev write read max offset ...passed 00:14:34.742 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:34.742 Test: blockdev writev readv 8 blocks ...passed 00:14:34.742 Test: blockdev writev readv 30 x 1block ...passed 00:14:34.742 Test: blockdev writev readv block ...passed 00:14:34.742 Test: blockdev writev readv size > 128k ...passed 00:14:34.742 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:34.742 Test: blockdev comparev and writev ...[2024-04-24 19:45:16.228655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.742 [2024-04-24 19:45:16.228692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:34.742 [2024-04-24 19:45:16.228717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.742 [2024-04-24 19:45:16.228734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:34.742 [2024-04-24 19:45:16.229153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.742 [2024-04-24 19:45:16.229179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:34.742 [2024-04-24 19:45:16.229201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.742 [2024-04-24 19:45:16.229227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:34.742 [2024-04-24 19:45:16.229650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.742 [2024-04-24 19:45:16.229675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:34.742 [2024-04-24 19:45:16.229698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.742 [2024-04-24 19:45:16.229720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:34.742 [2024-04-24 19:45:16.230127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.742 [2024-04-24 19:45:16.230151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:34.742 [2024-04-24 19:45:16.230177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:34.742 [2024-04-24 19:45:16.230194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:35.000 passed 00:14:35.000 Test: blockdev nvme passthru rw ...passed 00:14:35.000 Test: blockdev nvme passthru vendor specific ...[2024-04-24 19:45:16.314023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:35.000 [2024-04-24 19:45:16.314051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:35.000 [2024-04-24 19:45:16.314265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:35.000 [2024-04-24 19:45:16.314289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:35.000 [2024-04-24 19:45:16.314490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:35.000 [2024-04-24 19:45:16.314513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:35.000 [2024-04-24 19:45:16.314738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:35.000 [2024-04-24 19:45:16.314762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:35.000 passed 00:14:35.000 Test: blockdev nvme admin passthru ...passed 00:14:35.000 Test: blockdev copy ...passed 00:14:35.000 00:14:35.000 Run Summary: Type Total Ran Passed Failed Inactive 00:14:35.000 suites 1 1 n/a 0 0 00:14:35.000 tests 23 23 23 0 0 00:14:35.000 asserts 152 152 152 0 n/a 00:14:35.000 00:14:35.000 Elapsed time = 1.279 seconds 00:14:35.258 19:45:16 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:35.258 19:45:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:35.258 19:45:16 -- common/autotest_common.sh@10 -- # set +x 00:14:35.258 19:45:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:35.258 19:45:16 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:35.258 19:45:16 -- target/bdevio.sh@30 -- # nvmftestfini 00:14:35.258 19:45:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:35.258 19:45:16 -- nvmf/common.sh@117 -- # sync 00:14:35.258 19:45:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.258 19:45:16 -- nvmf/common.sh@120 -- # set +e 00:14:35.258 19:45:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.258 19:45:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.258 rmmod nvme_tcp 00:14:35.258 rmmod nvme_fabrics 00:14:35.523 rmmod nvme_keyring 00:14:35.523 19:45:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.523 19:45:16 -- nvmf/common.sh@124 -- # set -e 00:14:35.523 19:45:16 -- nvmf/common.sh@125 -- # return 0 00:14:35.523 19:45:16 -- nvmf/common.sh@478 -- # '[' -n 1696306 ']' 00:14:35.523 19:45:16 -- nvmf/common.sh@479 -- # killprocess 1696306 00:14:35.523 19:45:16 -- common/autotest_common.sh@936 -- # '[' -z 1696306 ']' 00:14:35.523 19:45:16 -- common/autotest_common.sh@940 -- # kill -0 1696306 00:14:35.523 19:45:16 -- common/autotest_common.sh@941 -- # uname 00:14:35.523 19:45:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:35.523 19:45:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1696306 00:14:35.523 19:45:16 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:14:35.523 19:45:16 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:14:35.523 19:45:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1696306' 00:14:35.523 killing process with pid 1696306 00:14:35.523 19:45:16 -- common/autotest_common.sh@955 -- # kill 1696306 00:14:35.523 19:45:16 -- common/autotest_common.sh@960 -- # wait 1696306 00:14:35.783 19:45:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:35.783 19:45:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:35.783 19:45:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:35.783 19:45:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.783 19:45:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:35.783 19:45:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.783 19:45:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.783 19:45:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.318 19:45:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:38.318 00:14:38.318 real 0m6.532s 00:14:38.318 user 0m10.727s 00:14:38.318 sys 0m2.501s 00:14:38.318 19:45:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:38.318 19:45:19 -- common/autotest_common.sh@10 -- # set +x 00:14:38.318 ************************************ 00:14:38.318 END TEST nvmf_bdevio_no_huge 00:14:38.318 ************************************ 00:14:38.318 19:45:19 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:38.318 19:45:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:38.318 19:45:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:38.318 19:45:19 -- common/autotest_common.sh@10 -- # set +x 00:14:38.318 ************************************ 00:14:38.318 START TEST nvmf_tls 00:14:38.318 ************************************ 00:14:38.318 19:45:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:38.318 * Looking for test storage... 
00:14:38.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.318 19:45:19 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.318 19:45:19 -- nvmf/common.sh@7 -- # uname -s 00:14:38.318 19:45:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.318 19:45:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.318 19:45:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.318 19:45:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.318 19:45:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.318 19:45:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.318 19:45:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.318 19:45:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.318 19:45:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.318 19:45:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.318 19:45:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.318 19:45:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.318 19:45:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.318 19:45:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.318 19:45:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.318 19:45:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.318 19:45:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.318 19:45:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.318 19:45:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.318 19:45:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.318 19:45:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.318 19:45:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.318 19:45:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.318 19:45:19 -- paths/export.sh@5 -- # export PATH 00:14:38.318 19:45:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.318 19:45:19 -- nvmf/common.sh@47 -- # : 0 00:14:38.318 19:45:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:38.318 19:45:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:38.318 19:45:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.318 19:45:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.319 19:45:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.319 19:45:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:38.319 19:45:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:38.319 19:45:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:38.319 19:45:19 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.319 19:45:19 -- target/tls.sh@62 -- # nvmftestinit 00:14:38.319 19:45:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:38.319 19:45:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.319 19:45:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:38.319 19:45:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:38.319 19:45:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:38.319 19:45:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.319 19:45:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.319 19:45:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.319 19:45:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:38.319 19:45:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:38.319 19:45:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:38.319 19:45:19 -- common/autotest_common.sh@10 -- # set +x 00:14:40.227 19:45:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:40.227 19:45:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.227 19:45:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.227 19:45:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.227 19:45:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.227 19:45:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.227 19:45:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.227 19:45:21 -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.227 19:45:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.227 19:45:21 -- nvmf/common.sh@296 -- # e810=() 00:14:40.227 
19:45:21 -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.227 19:45:21 -- nvmf/common.sh@297 -- # x722=() 00:14:40.227 19:45:21 -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.227 19:45:21 -- nvmf/common.sh@298 -- # mlx=() 00:14:40.227 19:45:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.227 19:45:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.227 19:45:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.227 19:45:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.227 19:45:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.227 19:45:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.227 19:45:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.227 19:45:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.227 19:45:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.227 19:45:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.227 19:45:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.227 19:45:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.227 19:45:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.227 19:45:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:40.227 19:45:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.227 19:45:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.227 19:45:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:40.227 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:40.227 19:45:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.227 19:45:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:40.227 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:40.227 19:45:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.227 19:45:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.227 19:45:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.227 19:45:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:40.227 19:45:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.227 19:45:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:40.227 Found net devices under 
0000:0a:00.0: cvl_0_0 00:14:40.227 19:45:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.227 19:45:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.227 19:45:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.227 19:45:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:40.227 19:45:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.227 19:45:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:40.227 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:40.227 19:45:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.227 19:45:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:40.227 19:45:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:40.227 19:45:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:40.227 19:45:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:40.228 19:45:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.228 19:45:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.228 19:45:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:40.228 19:45:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:40.228 19:45:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:40.228 19:45:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:40.228 19:45:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:40.228 19:45:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:40.228 19:45:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.228 19:45:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:40.228 19:45:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:40.228 19:45:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:40.228 19:45:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:40.228 19:45:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:40.228 19:45:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:40.228 19:45:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:40.228 19:45:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:40.228 19:45:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:40.228 19:45:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:40.228 19:45:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:40.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:40.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:14:40.228 00:14:40.228 --- 10.0.0.2 ping statistics --- 00:14:40.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.228 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:14:40.228 19:45:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:40.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:40.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:14:40.228 00:14:40.228 --- 10.0.0.1 ping statistics --- 00:14:40.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.228 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:14:40.228 19:45:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.228 19:45:21 -- nvmf/common.sh@411 -- # return 0 00:14:40.228 19:45:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:40.228 19:45:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.228 19:45:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:40.228 19:45:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:40.228 19:45:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.228 19:45:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:40.228 19:45:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:40.228 19:45:21 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:40.228 19:45:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:40.228 19:45:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:40.228 19:45:21 -- common/autotest_common.sh@10 -- # set +x 00:14:40.228 19:45:21 -- nvmf/common.sh@470 -- # nvmfpid=1698528 00:14:40.228 19:45:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:40.228 19:45:21 -- nvmf/common.sh@471 -- # waitforlisten 1698528 00:14:40.228 19:45:21 -- common/autotest_common.sh@817 -- # '[' -z 1698528 ']' 00:14:40.228 19:45:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.228 19:45:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:40.228 19:45:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.228 19:45:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:40.228 19:45:21 -- common/autotest_common.sh@10 -- # set +x 00:14:40.228 [2024-04-24 19:45:21.735984] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:14:40.228 [2024-04-24 19:45:21.736062] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.486 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.486 [2024-04-24 19:45:21.802135] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.486 [2024-04-24 19:45:21.910642] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.486 [2024-04-24 19:45:21.910703] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.486 [2024-04-24 19:45:21.910716] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.486 [2024-04-24 19:45:21.910729] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.486 [2024-04-24 19:45:21.910740] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
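The target is deliberately started with --wait-for-rpc here: socket-implementation options such as the TLS version can only be changed before the framework initializes. The steps traced next follow this ordering (rpc.py stands for the scripts/rpc.py wrapper invoked with its full path throughout this run):

rpc.py sock_set_default_impl -i ssl             # select the ssl socket implementation
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init                     # only now do the subsystems come up
rpc.py nvmf_create_transport -t tcp -o          # transport inherits the ssl settings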
00:14:40.486 [2024-04-24 19:45:21.910775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.486 19:45:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:40.486 19:45:21 -- common/autotest_common.sh@850 -- # return 0 00:14:40.486 19:45:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:40.486 19:45:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:40.486 19:45:21 -- common/autotest_common.sh@10 -- # set +x 00:14:40.486 19:45:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.486 19:45:21 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:40.486 19:45:21 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:40.743 true 00:14:40.743 19:45:22 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:40.743 19:45:22 -- target/tls.sh@73 -- # jq -r .tls_version 00:14:41.002 19:45:22 -- target/tls.sh@73 -- # version=0 00:14:41.002 19:45:22 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:41.002 19:45:22 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:41.262 19:45:22 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:41.262 19:45:22 -- target/tls.sh@81 -- # jq -r .tls_version 00:14:41.520 19:45:22 -- target/tls.sh@81 -- # version=13 00:14:41.520 19:45:22 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:41.520 19:45:22 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:41.780 19:45:23 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:41.780 19:45:23 -- target/tls.sh@89 -- # jq -r .tls_version 00:14:42.038 19:45:23 -- target/tls.sh@89 -- # version=7 00:14:42.038 19:45:23 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:42.038 19:45:23 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:42.038 19:45:23 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:42.305 19:45:23 -- target/tls.sh@96 -- # ktls=false 00:14:42.305 19:45:23 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:42.305 19:45:23 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:42.571 19:45:23 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:42.571 19:45:23 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:14:42.830 19:45:24 -- target/tls.sh@104 -- # ktls=true 00:14:42.830 19:45:24 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:14:42.830 19:45:24 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:43.089 19:45:24 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:43.089 19:45:24 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:14:43.352 19:45:24 -- target/tls.sh@112 -- # ktls=false 00:14:43.352 19:45:24 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:14:43.352 19:45:24 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:14:43.352 19:45:24 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:43.352 19:45:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:43.352 19:45:24 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:43.352 19:45:24 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:14:43.352 19:45:24 -- nvmf/common.sh@693 -- # digest=1 00:14:43.352 19:45:24 -- nvmf/common.sh@694 -- # python - 00:14:43.352 19:45:24 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:43.352 19:45:24 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:43.352 19:45:24 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:43.352 19:45:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:43.352 19:45:24 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:43.352 19:45:24 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:14:43.352 19:45:24 -- nvmf/common.sh@693 -- # digest=1 00:14:43.352 19:45:24 -- nvmf/common.sh@694 -- # python - 00:14:43.352 19:45:24 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:43.352 19:45:24 -- target/tls.sh@121 -- # mktemp 00:14:43.352 19:45:24 -- target/tls.sh@121 -- # key_path=/tmp/tmp.cHuUKy4YaI 00:14:43.352 19:45:24 -- target/tls.sh@122 -- # mktemp 00:14:43.352 19:45:24 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.5GC57Ucw0e 00:14:43.352 19:45:24 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:43.352 19:45:24 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:43.352 19:45:24 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.cHuUKy4YaI 00:14:43.352 19:45:24 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.5GC57Ucw0e 00:14:43.352 19:45:24 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:43.610 19:45:24 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:14:43.868 19:45:25 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.cHuUKy4YaI 00:14:43.868 19:45:25 -- target/tls.sh@49 -- # local key=/tmp/tmp.cHuUKy4YaI 00:14:43.868 19:45:25 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:44.126 [2024-04-24 19:45:25.543314] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.126 19:45:25 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:44.385 19:45:25 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:44.642 [2024-04-24 19:45:26.008546] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:44.642 [2024-04-24 19:45:26.008816] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.642 19:45:26 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:44.910 malloc0 00:14:44.910 19:45:26 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:45.172 19:45:26 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cHuUKy4YaI 00:14:45.431 [2024-04-24 19:45:26.799173] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:45.431 19:45:26 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.cHuUKy4YaI 00:14:45.431 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.407 Initializing NVMe Controllers 00:14:55.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:55.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:55.407 Initialization complete. Launching workers. 00:14:55.407 ======================================================== 00:14:55.407 Latency(us) 00:14:55.407 Device Information : IOPS MiB/s Average min max 00:14:55.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7657.37 29.91 8360.79 1331.40 10124.76 00:14:55.407 ======================================================== 00:14:55.407 Total : 7657.37 29.91 8360.79 1331.40 10124.76 00:14:55.407 00:14:55.407 19:45:36 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cHuUKy4YaI 00:14:55.407 19:45:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:55.407 19:45:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:55.407 19:45:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:55.407 19:45:36 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cHuUKy4YaI' 00:14:55.407 19:45:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:55.407 19:45:36 -- target/tls.sh@28 -- # bdevperf_pid=1700418 00:14:55.407 19:45:36 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:55.407 19:45:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:55.407 19:45:36 -- target/tls.sh@31 -- # waitforlisten 1700418 /var/tmp/bdevperf.sock 00:14:55.407 19:45:36 -- common/autotest_common.sh@817 -- # '[' -z 1700418 ']' 00:14:55.407 19:45:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.407 19:45:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:55.407 19:45:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:55.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:55.407 19:45:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:55.407 19:45:36 -- common/autotest_common.sh@10 -- # set +x 00:14:55.665 [2024-04-24 19:45:36.959225] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
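The two keys chmod'd to 0600 above come out of format_interchange_psk, which wraps a raw hex string in the NVMe TLS PSK interchange form NVMeTLSkey-1:<hash>:<base64 payload>:, the second argument becoming the two-digit hash field (1 -> 01). The sketch below is a rough reconstruction under assumptions, not the verbatim nvmf/common.sh helper; in particular the little-endian CRC-32 suffix is an assumption based on the interchange format, and format_key_sketch is a hypothetical name:

format_key_sketch() {   # hypothetical stand-in for the traced format_key helper
    python3 -c 'import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key))   # assumption: little-endian CRC-32 of the key bytes
print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())' "$1"
}
format_key_sketch 00112233445566778899aabbccddeeff
# expected to print a key of the same shape as the /tmp/tmp.cHuUKy4YaI contents above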
00:14:55.665 [2024-04-24 19:45:36.959295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1700418 ] 00:14:55.665 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.665 [2024-04-24 19:45:37.015949] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.665 [2024-04-24 19:45:37.121575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.922 19:45:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:55.922 19:45:37 -- common/autotest_common.sh@850 -- # return 0 00:14:55.922 19:45:37 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cHuUKy4YaI 00:14:56.195 [2024-04-24 19:45:37.454335] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:56.195 [2024-04-24 19:45:37.454440] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:56.196 TLSTESTn1 00:14:56.196 19:45:37 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:56.196 Running I/O for 10 seconds... 00:15:08.421 00:15:08.421 Latency(us) 00:15:08.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.421 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:08.422 Verification LBA range: start 0x0 length 0x2000 00:15:08.422 TLSTESTn1 : 10.07 1618.15 6.32 0.00 0.00 78855.34 8738.13 112624.83 00:15:08.422 =================================================================================================================== 00:15:08.422 Total : 1618.15 6.32 0.00 0.00 78855.34 8738.13 112624.83 00:15:08.422 0 00:15:08.422 19:45:47 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:08.422 19:45:47 -- target/tls.sh@45 -- # killprocess 1700418 00:15:08.422 19:45:47 -- common/autotest_common.sh@936 -- # '[' -z 1700418 ']' 00:15:08.422 19:45:47 -- common/autotest_common.sh@940 -- # kill -0 1700418 00:15:08.422 19:45:47 -- common/autotest_common.sh@941 -- # uname 00:15:08.422 19:45:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:08.422 19:45:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1700418 00:15:08.422 19:45:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:08.422 19:45:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:08.422 19:45:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1700418' 00:15:08.422 killing process with pid 1700418 00:15:08.422 19:45:47 -- common/autotest_common.sh@955 -- # kill 1700418 00:15:08.422 Received shutdown signal, test time was about 10.000000 seconds 00:15:08.422 00:15:08.422 Latency(us) 00:15:08.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.422 =================================================================================================================== 00:15:08.422 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:08.422 [2024-04-24 19:45:47.782320] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:08.422 19:45:47 -- common/autotest_common.sh@960 -- # wait 1700418 00:15:08.422 19:45:48 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5GC57Ucw0e 00:15:08.422 19:45:48 -- common/autotest_common.sh@638 -- # local es=0 00:15:08.422 19:45:48 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5GC57Ucw0e 00:15:08.422 19:45:48 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:15:08.422 19:45:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:08.422 19:45:48 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:15:08.422 19:45:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:08.422 19:45:48 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5GC57Ucw0e 00:15:08.422 19:45:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:08.422 19:45:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:08.422 19:45:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:08.422 19:45:48 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5GC57Ucw0e' 00:15:08.422 19:45:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:08.422 19:45:48 -- target/tls.sh@28 -- # bdevperf_pid=1701624 00:15:08.422 19:45:48 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:08.422 19:45:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:08.422 19:45:48 -- target/tls.sh@31 -- # waitforlisten 1701624 /var/tmp/bdevperf.sock 00:15:08.422 19:45:48 -- common/autotest_common.sh@817 -- # '[' -z 1701624 ']' 00:15:08.422 19:45:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:08.422 19:45:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:08.422 19:45:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:08.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:08.422 19:45:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:08.422 19:45:48 -- common/autotest_common.sh@10 -- # set +x 00:15:08.422 [2024-04-24 19:45:48.085856] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
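This block is an expected-failure case: host1 was registered on cnode1 with the first key, so attaching with key_2 (/tmp/tmp.5GC57Ucw0e) must be rejected, and the NOT wrapper inverts the exit status. A sketch of the idea; the real helper in autotest_common.sh is more elaborate:

NOT() { ! "$@"; }   # succeed only if the wrapped command fails
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5GC57Ucw0e
# the target drops the connection on the PSK mismatch, bdev_nvme_attach_controller
# returns the JSON-RPC error seen below, and the test passes because the
# failure was the expected outcome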
00:15:08.422 [2024-04-24 19:45:48.085961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1701624 ] 00:15:08.422 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.422 [2024-04-24 19:45:48.147529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.422 [2024-04-24 19:45:48.256439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.422 19:45:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:08.422 19:45:48 -- common/autotest_common.sh@850 -- # return 0 00:15:08.422 19:45:48 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5GC57Ucw0e 00:15:08.422 [2024-04-24 19:45:48.571436] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.422 [2024-04-24 19:45:48.571571] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:08.422 [2024-04-24 19:45:48.580774] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:08.422 [2024-04-24 19:45:48.581528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14de230 (107): Transport endpoint is not connected 00:15:08.422 [2024-04-24 19:45:48.582520] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14de230 (9): Bad file descriptor 00:15:08.422 [2024-04-24 19:45:48.583519] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:08.422 [2024-04-24 19:45:48.583540] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:08.422 [2024-04-24 19:45:48.583553] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:08.422 request: 00:15:08.422 { 00:15:08.422 "name": "TLSTEST", 00:15:08.422 "trtype": "tcp", 00:15:08.422 "traddr": "10.0.0.2", 00:15:08.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.422 "adrfam": "ipv4", 00:15:08.422 "trsvcid": "4420", 00:15:08.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.422 "psk": "/tmp/tmp.5GC57Ucw0e", 00:15:08.422 "method": "bdev_nvme_attach_controller", 00:15:08.422 "req_id": 1 00:15:08.422 } 00:15:08.422 Got JSON-RPC error response 00:15:08.422 response: 00:15:08.422 { 00:15:08.422 "code": -32602, 00:15:08.422 "message": "Invalid parameters" 00:15:08.422 } 00:15:08.422 19:45:48 -- target/tls.sh@36 -- # killprocess 1701624 00:15:08.422 19:45:48 -- common/autotest_common.sh@936 -- # '[' -z 1701624 ']' 00:15:08.422 19:45:48 -- common/autotest_common.sh@940 -- # kill -0 1701624 00:15:08.422 19:45:48 -- common/autotest_common.sh@941 -- # uname 00:15:08.422 19:45:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:08.422 19:45:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1701624 00:15:08.422 19:45:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:08.422 19:45:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:08.422 19:45:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1701624' 00:15:08.422 killing process with pid 1701624 00:15:08.422 19:45:48 -- common/autotest_common.sh@955 -- # kill 1701624 00:15:08.422 Received shutdown signal, test time was about 10.000000 seconds 00:15:08.422 00:15:08.422 Latency(us) 00:15:08.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.422 =================================================================================================================== 00:15:08.422 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:08.422 [2024-04-24 19:45:48.636123] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:08.422 19:45:48 -- common/autotest_common.sh@960 -- # wait 1701624 00:15:08.422 19:45:48 -- target/tls.sh@37 -- # return 1 00:15:08.422 19:45:48 -- common/autotest_common.sh@641 -- # es=1 00:15:08.422 19:45:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:08.422 19:45:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:08.422 19:45:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:08.422 19:45:48 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cHuUKy4YaI 00:15:08.422 19:45:48 -- common/autotest_common.sh@638 -- # local es=0 00:15:08.422 19:45:48 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cHuUKy4YaI 00:15:08.422 19:45:48 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:15:08.422 19:45:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:08.422 19:45:48 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:15:08.422 19:45:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:08.422 19:45:48 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cHuUKy4YaI 00:15:08.422 19:45:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:08.422 19:45:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:08.422 19:45:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:15:08.422 19:45:48 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cHuUKy4YaI' 00:15:08.422 19:45:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:08.422 19:45:48 -- target/tls.sh@28 -- # bdevperf_pid=1701761 00:15:08.422 19:45:48 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:08.422 19:45:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:08.422 19:45:48 -- target/tls.sh@31 -- # waitforlisten 1701761 /var/tmp/bdevperf.sock 00:15:08.422 19:45:48 -- common/autotest_common.sh@817 -- # '[' -z 1701761 ']' 00:15:08.422 19:45:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:08.422 19:45:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:08.422 19:45:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:08.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:08.422 19:45:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:08.422 19:45:48 -- common/autotest_common.sh@10 -- # set +x 00:15:08.422 [2024-04-24 19:45:48.933878] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:15:08.422 [2024-04-24 19:45:48.933992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1701761 ] 00:15:08.422 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.422 [2024-04-24 19:45:48.994591] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.422 [2024-04-24 19:45:49.106448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.422 19:45:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:08.422 19:45:49 -- common/autotest_common.sh@850 -- # return 0 00:15:08.422 19:45:49 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.cHuUKy4YaI 00:15:08.422 [2024-04-24 19:45:49.447689] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.422 [2024-04-24 19:45:49.447835] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:08.422 [2024-04-24 19:45:49.453336] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:08.422 [2024-04-24 19:45:49.453369] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:08.422 [2024-04-24 19:45:49.453410] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:08.422 [2024-04-24 19:45:49.453884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb7230 (107): Transport endpoint is not connected 00:15:08.422 [2024-04-24 19:45:49.454871] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb7230 (9): Bad file descriptor 00:15:08.422 [2024-04-24 19:45:49.455870] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:08.422 [2024-04-24 19:45:49.455893] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:08.422 [2024-04-24 19:45:49.455908] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:08.422 request: 00:15:08.422 { 00:15:08.422 "name": "TLSTEST", 00:15:08.422 "trtype": "tcp", 00:15:08.422 "traddr": "10.0.0.2", 00:15:08.422 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:08.422 "adrfam": "ipv4", 00:15:08.422 "trsvcid": "4420", 00:15:08.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.422 "psk": "/tmp/tmp.cHuUKy4YaI", 00:15:08.422 "method": "bdev_nvme_attach_controller", 00:15:08.422 "req_id": 1 00:15:08.422 } 00:15:08.422 Got JSON-RPC error response 00:15:08.422 response: 00:15:08.422 { 00:15:08.422 "code": -32602, 00:15:08.422 "message": "Invalid parameters" 00:15:08.422 } 00:15:08.422 19:45:49 -- target/tls.sh@36 -- # killprocess 1701761 00:15:08.422 19:45:49 -- common/autotest_common.sh@936 -- # '[' -z 1701761 ']' 00:15:08.422 19:45:49 -- common/autotest_common.sh@940 -- # kill -0 1701761 00:15:08.422 19:45:49 -- common/autotest_common.sh@941 -- # uname 00:15:08.422 19:45:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:08.422 19:45:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1701761 00:15:08.422 19:45:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:08.422 19:45:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:08.422 19:45:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1701761' 00:15:08.422 killing process with pid 1701761 00:15:08.422 19:45:49 -- common/autotest_common.sh@955 -- # kill 1701761 00:15:08.422 Received shutdown signal, test time was about 10.000000 seconds 00:15:08.422 00:15:08.422 Latency(us) 00:15:08.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.422 =================================================================================================================== 00:15:08.422 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:08.422 [2024-04-24 19:45:49.508809] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:08.422 19:45:49 -- common/autotest_common.sh@960 -- # wait 1701761 00:15:08.422 19:45:49 -- target/tls.sh@37 -- # return 1 00:15:08.422 19:45:49 -- common/autotest_common.sh@641 -- # es=1 00:15:08.422 19:45:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:08.422 19:45:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:08.422 19:45:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:08.422 19:45:49 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cHuUKy4YaI 00:15:08.422 19:45:49 -- common/autotest_common.sh@638 -- # local es=0 00:15:08.423 19:45:49 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cHuUKy4YaI 00:15:08.423 19:45:49 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:15:08.423 19:45:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:08.423 19:45:49 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:15:08.423 19:45:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:08.423 19:45:49 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cHuUKy4YaI 00:15:08.423 19:45:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:08.423 19:45:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:08.423 19:45:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:08.423 19:45:49 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cHuUKy4YaI' 00:15:08.423 19:45:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:08.423 19:45:49 -- target/tls.sh@28 -- # bdevperf_pid=1701898 00:15:08.423 19:45:49 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:08.423 19:45:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:08.423 19:45:49 -- target/tls.sh@31 -- # waitforlisten 1701898 /var/tmp/bdevperf.sock 00:15:08.423 19:45:49 -- common/autotest_common.sh@817 -- # '[' -z 1701898 ']' 00:15:08.423 19:45:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:08.423 19:45:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:08.423 19:45:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:08.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:08.423 19:45:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:08.423 19:45:49 -- common/autotest_common.sh@10 -- # set +x 00:15:08.423 [2024-04-24 19:45:49.814341] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:15:08.423 [2024-04-24 19:45:49.814425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1701898 ] 00:15:08.423 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.423 [2024-04-24 19:45:49.873328] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.685 [2024-04-24 19:45:49.980168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.685 19:45:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:08.685 19:45:50 -- common/autotest_common.sh@850 -- # return 0 00:15:08.685 19:45:50 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cHuUKy4YaI 00:15:08.943 [2024-04-24 19:45:50.326465] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.943 [2024-04-24 19:45:50.326573] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:08.943 [2024-04-24 19:45:50.335270] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:08.943 [2024-04-24 19:45:50.335301] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:08.944 [2024-04-24 19:45:50.335339] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:08.944 [2024-04-24 19:45:50.336458] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc80230 (107): Transport endpoint is not connected 00:15:08.944 [2024-04-24 19:45:50.337449] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc80230 (9): Bad file descriptor 00:15:08.944 [2024-04-24 19:45:50.338449] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:15:08.944 [2024-04-24 19:45:50.338469] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:08.944 [2024-04-24 19:45:50.338482] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
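Both PSK-mismatch cases fail at the same spot: during the TLS handshake the target reconstructs the PSK identity it expects for this host/subsystem pair and cannot find a matching key, so the socket is torn down before any NVMe traffic flows and the initiator sees the connection as dead. The identity string appears verbatim in the errors above; a sketch of its assembly (the reading of "NVMe0R01" as protocol version 0, retained-key indicator R and hash identifier 01 is an assumption; the overall shape is copied from the log):

    # Hypothetical helper mirroring the identity the target looks up.
    def psk_identity(hostnqn: str, subnqn: str) -> str:
        return f"NVMe0R01 {hostnqn} {subnqn}"

    print(psk_identity("nqn.2016-06.io.spdk:host1",
                       "nqn.2016-06.io.spdk:cnode2"))
    # NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2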
00:15:08.944 request: 00:15:08.944 { 00:15:08.944 "name": "TLSTEST", 00:15:08.944 "trtype": "tcp", 00:15:08.944 "traddr": "10.0.0.2", 00:15:08.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.944 "adrfam": "ipv4", 00:15:08.944 "trsvcid": "4420", 00:15:08.944 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:08.944 "psk": "/tmp/tmp.cHuUKy4YaI", 00:15:08.944 "method": "bdev_nvme_attach_controller", 00:15:08.944 "req_id": 1 00:15:08.944 } 00:15:08.944 Got JSON-RPC error response 00:15:08.944 response: 00:15:08.944 { 00:15:08.944 "code": -32602, 00:15:08.944 "message": "Invalid parameters" 00:15:08.944 } 00:15:08.944 19:45:50 -- target/tls.sh@36 -- # killprocess 1701898 00:15:08.944 19:45:50 -- common/autotest_common.sh@936 -- # '[' -z 1701898 ']' 00:15:08.944 19:45:50 -- common/autotest_common.sh@940 -- # kill -0 1701898 00:15:08.944 19:45:50 -- common/autotest_common.sh@941 -- # uname 00:15:08.944 19:45:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:08.944 19:45:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1701898 00:15:08.944 19:45:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:08.944 19:45:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:08.944 19:45:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1701898' 00:15:08.944 killing process with pid 1701898 00:15:08.944 19:45:50 -- common/autotest_common.sh@955 -- # kill 1701898 00:15:08.944 Received shutdown signal, test time was about 10.000000 seconds 00:15:08.944 00:15:08.944 Latency(us) 00:15:08.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.944 =================================================================================================================== 00:15:08.944 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:08.944 [2024-04-24 19:45:50.388300] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:08.944 19:45:50 -- common/autotest_common.sh@960 -- # wait 1701898 00:15:09.202 19:45:50 -- target/tls.sh@37 -- # return 1 00:15:09.202 19:45:50 -- common/autotest_common.sh@641 -- # es=1 00:15:09.202 19:45:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:09.202 19:45:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:09.202 19:45:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:09.202 19:45:50 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:09.202 19:45:50 -- common/autotest_common.sh@638 -- # local es=0 00:15:09.202 19:45:50 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:09.202 19:45:50 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:15:09.202 19:45:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:09.202 19:45:50 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:15:09.202 19:45:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:09.202 19:45:50 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:09.202 19:45:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:09.202 19:45:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:09.202 19:45:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:09.202 19:45:50 -- target/tls.sh@23 -- # psk= 
00:15:09.202 19:45:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:09.202 19:45:50 -- target/tls.sh@28 -- # bdevperf_pid=1702035 00:15:09.202 19:45:50 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:09.202 19:45:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:09.202 19:45:50 -- target/tls.sh@31 -- # waitforlisten 1702035 /var/tmp/bdevperf.sock 00:15:09.202 19:45:50 -- common/autotest_common.sh@817 -- # '[' -z 1702035 ']' 00:15:09.202 19:45:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.202 19:45:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:09.202 19:45:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:09.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:09.202 19:45:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:09.202 19:45:50 -- common/autotest_common.sh@10 -- # set +x 00:15:09.202 [2024-04-24 19:45:50.688903] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:15:09.202 [2024-04-24 19:45:50.688997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1702035 ] 00:15:09.202 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.460 [2024-04-24 19:45:50.747241] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.460 [2024-04-24 19:45:50.848200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.460 19:45:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:09.460 19:45:50 -- common/autotest_common.sh@850 -- # return 0 00:15:09.460 19:45:50 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:09.719 [2024-04-24 19:45:51.217579] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:09.719 [2024-04-24 19:45:51.219294] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1024bb0 (9): Bad file descriptor 00:15:09.719 [2024-04-24 19:45:51.220292] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:09.719 [2024-04-24 19:45:51.220314] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:09.719 [2024-04-24 19:45:51.220328] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
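This third negative case drops the PSK entirely (psk=), so the initiator talks plain TCP to a listener that was created with -k and therefore requires TLS; the connection never becomes usable and the first read fails with errno 107. A quick check of what that errno means on Linux:

    import errno
    import os

    # errno 107 from the spdk_sock_recv() failure above:
    print(errno.errorcode[107])  # ENOTCONN
    print(os.strerror(107))      # Transport endpoint is not connected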
00:15:09.719 request: 00:15:09.719 { 00:15:09.719 "name": "TLSTEST", 00:15:09.719 "trtype": "tcp", 00:15:09.719 "traddr": "10.0.0.2", 00:15:09.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.719 "adrfam": "ipv4", 00:15:09.719 "trsvcid": "4420", 00:15:09.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.719 "method": "bdev_nvme_attach_controller", 00:15:09.719 "req_id": 1 00:15:09.719 } 00:15:09.719 Got JSON-RPC error response 00:15:09.719 response: 00:15:09.719 { 00:15:09.719 "code": -32602, 00:15:09.719 "message": "Invalid parameters" 00:15:09.719 } 00:15:09.977 19:45:51 -- target/tls.sh@36 -- # killprocess 1702035 00:15:09.977 19:45:51 -- common/autotest_common.sh@936 -- # '[' -z 1702035 ']' 00:15:09.977 19:45:51 -- common/autotest_common.sh@940 -- # kill -0 1702035 00:15:09.977 19:45:51 -- common/autotest_common.sh@941 -- # uname 00:15:09.977 19:45:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:09.977 19:45:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1702035 00:15:09.977 19:45:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:09.977 19:45:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:09.977 19:45:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1702035' 00:15:09.977 killing process with pid 1702035 00:15:09.977 19:45:51 -- common/autotest_common.sh@955 -- # kill 1702035 00:15:09.977 Received shutdown signal, test time was about 10.000000 seconds 00:15:09.977 00:15:09.977 Latency(us) 00:15:09.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.977 =================================================================================================================== 00:15:09.977 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:09.977 19:45:51 -- common/autotest_common.sh@960 -- # wait 1702035 00:15:10.236 19:45:51 -- target/tls.sh@37 -- # return 1 00:15:10.236 19:45:51 -- common/autotest_common.sh@641 -- # es=1 00:15:10.236 19:45:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:10.236 19:45:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:10.236 19:45:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:10.236 19:45:51 -- target/tls.sh@158 -- # killprocess 1698528 00:15:10.236 19:45:51 -- common/autotest_common.sh@936 -- # '[' -z 1698528 ']' 00:15:10.236 19:45:51 -- common/autotest_common.sh@940 -- # kill -0 1698528 00:15:10.236 19:45:51 -- common/autotest_common.sh@941 -- # uname 00:15:10.236 19:45:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:10.236 19:45:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1698528 00:15:10.237 19:45:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:10.237 19:45:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:10.237 19:45:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1698528' 00:15:10.237 killing process with pid 1698528 00:15:10.237 19:45:51 -- common/autotest_common.sh@955 -- # kill 1698528 00:15:10.237 [2024-04-24 19:45:51.553457] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:10.237 19:45:51 -- common/autotest_common.sh@960 -- # wait 1698528 00:15:10.511 19:45:51 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:10.511 19:45:51 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:15:10.511 19:45:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:15:10.511 19:45:51 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:15:10.511 19:45:51 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:10.511 19:45:51 -- nvmf/common.sh@693 -- # digest=2 00:15:10.511 19:45:51 -- nvmf/common.sh@694 -- # python - 00:15:10.511 19:45:51 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:10.511 19:45:51 -- target/tls.sh@160 -- # mktemp 00:15:10.511 19:45:51 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.R3XR65JnTD 00:15:10.511 19:45:51 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:10.511 19:45:51 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.R3XR65JnTD 00:15:10.511 19:45:51 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:15:10.511 19:45:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:10.511 19:45:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:10.511 19:45:51 -- common/autotest_common.sh@10 -- # set +x 00:15:10.511 19:45:51 -- nvmf/common.sh@470 -- # nvmfpid=1702187 00:15:10.511 19:45:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:10.511 19:45:51 -- nvmf/common.sh@471 -- # waitforlisten 1702187 00:15:10.511 19:45:51 -- common/autotest_common.sh@817 -- # '[' -z 1702187 ']' 00:15:10.511 19:45:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.511 19:45:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:10.511 19:45:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.511 19:45:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:10.511 19:45:51 -- common/autotest_common.sh@10 -- # set +x 00:15:10.511 [2024-04-24 19:45:51.939228] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:15:10.511 [2024-04-24 19:45:51.939341] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.511 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.511 [2024-04-24 19:45:52.009439] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.782 [2024-04-24 19:45:52.122880] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.782 [2024-04-24 19:45:52.122962] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.782 [2024-04-24 19:45:52.122978] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.782 [2024-04-24 19:45:52.122991] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.782 [2024-04-24 19:45:52.123004] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
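The format_interchange_psk step above feeds the raw 48-byte hex string through an inline Python snippet to produce the TLS PSK interchange form stored in key_long. A self-contained sketch of the same transformation (the little-endian CRC byte order and the reading of 02 as the 48-byte/SHA-384 variant are assumptions based on SPDK's format_key helper; prefix and key are copied from the log):

    import base64
    import zlib

    prefix = "NVMeTLSkey-1"
    key = b"00112233445566778899aabbccddeeff0011223344556677"
    digest = 2  # hash identifier; 02 assumed to mark the 48-byte variant

    # Interchange layout: <prefix>:<hash id>:<base64(key + CRC-32)>:
    crc = zlib.crc32(key).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(key + crc).decode("utf-8")
    print(f"{prefix}:{digest:02x}:{b64}:")
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: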
00:15:10.782 [2024-04-24 19:45:52.123039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.719 19:45:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:11.719 19:45:52 -- common/autotest_common.sh@850 -- # return 0 00:15:11.719 19:45:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:11.719 19:45:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:11.719 19:45:52 -- common/autotest_common.sh@10 -- # set +x 00:15:11.719 19:45:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.719 19:45:52 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.R3XR65JnTD 00:15:11.719 19:45:52 -- target/tls.sh@49 -- # local key=/tmp/tmp.R3XR65JnTD 00:15:11.719 19:45:52 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:11.719 [2024-04-24 19:45:53.147908] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.719 19:45:53 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:11.977 19:45:53 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:12.235 [2024-04-24 19:45:53.633235] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:12.235 [2024-04-24 19:45:53.633484] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.235 19:45:53 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:12.493 malloc0 00:15:12.493 19:45:53 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:12.751 19:45:54 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R3XR65JnTD 00:15:13.010 [2024-04-24 19:45:54.427158] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:13.010 19:45:54 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.R3XR65JnTD 00:15:13.010 19:45:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:13.010 19:45:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:13.010 19:45:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:13.010 19:45:54 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.R3XR65JnTD' 00:15:13.010 19:45:54 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:13.010 19:45:54 -- target/tls.sh@28 -- # bdevperf_pid=1702484 00:15:13.010 19:45:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:13.010 19:45:54 -- target/tls.sh@31 -- # waitforlisten 1702484 /var/tmp/bdevperf.sock 00:15:13.010 19:45:54 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:13.010 19:45:54 -- common/autotest_common.sh@817 -- # '[' -z 1702484 ']' 00:15:13.010 19:45:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.010 19:45:54 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:15:13.010 19:45:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.010 19:45:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:13.010 19:45:54 -- common/autotest_common.sh@10 -- # set +x 00:15:13.010 [2024-04-24 19:45:54.487149] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:15:13.010 [2024-04-24 19:45:54.487243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1702484 ] 00:15:13.010 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.268 [2024-04-24 19:45:54.546474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.268 [2024-04-24 19:45:54.654053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.268 19:45:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:13.268 19:45:54 -- common/autotest_common.sh@850 -- # return 0 00:15:13.268 19:45:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R3XR65JnTD 00:15:13.528 [2024-04-24 19:45:54.990448] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:13.528 [2024-04-24 19:45:54.990580] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:13.788 TLSTESTn1 00:15:13.788 19:45:55 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:13.788 Running I/O for 10 seconds... 
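perform_tests starts the verify workload bdevperf was configured for (-q 128 -o 4096 -w verify); in the results table that follows, the MiB/s column is simply IOPS times the 4096-byte I/O size. A quick cross-check of the TLSTESTn1 row:

    iops = 1537.91      # IOPS from the TLSTESTn1 row below
    io_size = 4096      # -o 4096 on the bdevperf command line
    print(iops * io_size / 2**20)  # ~6.01 MiB/s, matching the table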
00:15:26.000 00:15:26.000 Latency(us) 00:15:26.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.000 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:26.000 Verification LBA range: start 0x0 length 0x2000 00:15:26.000 TLSTESTn1 : 10.07 1537.91 6.01 0.00 0.00 82969.64 5825.42 112624.83 00:15:26.000 =================================================================================================================== 00:15:26.000 Total : 1537.91 6.01 0.00 0.00 82969.64 5825.42 112624.83 00:15:26.000 0 00:15:26.000 19:46:05 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:26.000 19:46:05 -- target/tls.sh@45 -- # killprocess 1702484 00:15:26.000 19:46:05 -- common/autotest_common.sh@936 -- # '[' -z 1702484 ']' 00:15:26.000 19:46:05 -- common/autotest_common.sh@940 -- # kill -0 1702484 00:15:26.000 19:46:05 -- common/autotest_common.sh@941 -- # uname 00:15:26.000 19:46:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:26.000 19:46:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1702484 00:15:26.000 19:46:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:26.000 19:46:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:26.000 19:46:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1702484' 00:15:26.000 killing process with pid 1702484 00:15:26.000 19:46:05 -- common/autotest_common.sh@955 -- # kill 1702484 00:15:26.000 Received shutdown signal, test time was about 10.000000 seconds 00:15:26.000 00:15:26.000 Latency(us) 00:15:26.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.000 =================================================================================================================== 00:15:26.000 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:26.000 [2024-04-24 19:46:05.337232] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:26.000 19:46:05 -- common/autotest_common.sh@960 -- # wait 1702484 00:15:26.000 19:46:05 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.R3XR65JnTD 00:15:26.000 19:46:05 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.R3XR65JnTD 00:15:26.000 19:46:05 -- common/autotest_common.sh@638 -- # local es=0 00:15:26.000 19:46:05 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.R3XR65JnTD 00:15:26.000 19:46:05 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:15:26.000 19:46:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:26.000 19:46:05 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:15:26.000 19:46:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:26.000 19:46:05 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.R3XR65JnTD 00:15:26.000 19:46:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:26.000 19:46:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:26.000 19:46:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:26.000 19:46:05 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.R3XR65JnTD' 00:15:26.000 19:46:05 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:26.000 19:46:05 -- target/tls.sh@28 -- # 
bdevperf_pid=1703802 00:15:26.000 19:46:05 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:26.000 19:46:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:26.000 19:46:05 -- target/tls.sh@31 -- # waitforlisten 1703802 /var/tmp/bdevperf.sock 00:15:26.000 19:46:05 -- common/autotest_common.sh@817 -- # '[' -z 1703802 ']' 00:15:26.000 19:46:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:26.000 19:46:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:26.000 19:46:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:26.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:26.000 19:46:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:26.000 19:46:05 -- common/autotest_common.sh@10 -- # set +x 00:15:26.000 [2024-04-24 19:46:05.655497] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:15:26.000 [2024-04-24 19:46:05.655590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1703802 ] 00:15:26.000 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.001 [2024-04-24 19:46:05.714157] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.001 [2024-04-24 19:46:05.820861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.001 19:46:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:26.001 19:46:05 -- common/autotest_common.sh@850 -- # return 0 00:15:26.001 19:46:05 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R3XR65JnTD 00:15:26.001 [2024-04-24 19:46:06.204207] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:26.001 [2024-04-24 19:46:06.204287] bdev_nvme.c:6067:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:26.001 [2024-04-24 19:46:06.204306] bdev_nvme.c:6176:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.R3XR65JnTD 00:15:26.001 request: 00:15:26.001 { 00:15:26.001 "name": "TLSTEST", 00:15:26.001 "trtype": "tcp", 00:15:26.001 "traddr": "10.0.0.2", 00:15:26.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:26.001 "adrfam": "ipv4", 00:15:26.001 "trsvcid": "4420", 00:15:26.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.001 "psk": "/tmp/tmp.R3XR65JnTD", 00:15:26.001 "method": "bdev_nvme_attach_controller", 00:15:26.001 "req_id": 1 00:15:26.001 } 00:15:26.001 Got JSON-RPC error response 00:15:26.001 response: 00:15:26.001 { 00:15:26.001 "code": -1, 00:15:26.001 "message": "Operation not permitted" 00:15:26.001 } 00:15:26.001 19:46:06 -- target/tls.sh@36 -- # killprocess 1703802 00:15:26.001 19:46:06 -- common/autotest_common.sh@936 -- # '[' -z 1703802 ']' 00:15:26.001 19:46:06 -- common/autotest_common.sh@940 -- # kill -0 1703802 00:15:26.001 19:46:06 -- common/autotest_common.sh@941 -- # uname 00:15:26.001 19:46:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:26.001 
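Loosening the key file to 0666 flips the failure to the initiator side before any connect is attempted: bdev_nvme refuses to load a PSK that is accessible beyond its owner, and the RPC surfaces that as -1 "Operation not permitted". A sketch of that kind of gate (the exact permission mask is an assumption; the log only establishes that 0600 passes and 0666 fails):

    import os
    import stat

    def psk_perms_ok(path: str) -> bool:
        # Reject keys that grant any group/other access (assumed mask).
        return not os.stat(path).st_mode & (stat.S_IRWXG | stat.S_IRWXO)

    print(psk_perms_ok("/tmp/tmp.R3XR65JnTD"))  # False while chmod 0666 is in effect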
19:46:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1703802 00:15:26.001 19:46:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:26.001 19:46:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:26.001 19:46:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1703802' 00:15:26.001 killing process with pid 1703802 00:15:26.001 19:46:06 -- common/autotest_common.sh@955 -- # kill 1703802 00:15:26.001 Received shutdown signal, test time was about 10.000000 seconds 00:15:26.001 00:15:26.001 Latency(us) 00:15:26.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.001 =================================================================================================================== 00:15:26.001 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:26.001 19:46:06 -- common/autotest_common.sh@960 -- # wait 1703802 00:15:26.001 19:46:06 -- target/tls.sh@37 -- # return 1 00:15:26.001 19:46:06 -- common/autotest_common.sh@641 -- # es=1 00:15:26.001 19:46:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:26.001 19:46:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:26.001 19:46:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:26.001 19:46:06 -- target/tls.sh@174 -- # killprocess 1702187 00:15:26.001 19:46:06 -- common/autotest_common.sh@936 -- # '[' -z 1702187 ']' 00:15:26.001 19:46:06 -- common/autotest_common.sh@940 -- # kill -0 1702187 00:15:26.001 19:46:06 -- common/autotest_common.sh@941 -- # uname 00:15:26.001 19:46:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:26.001 19:46:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1702187 00:15:26.001 19:46:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:26.001 19:46:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:26.001 19:46:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1702187' 00:15:26.001 killing process with pid 1702187 00:15:26.001 19:46:06 -- common/autotest_common.sh@955 -- # kill 1702187 00:15:26.001 [2024-04-24 19:46:06.527260] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:26.001 19:46:06 -- common/autotest_common.sh@960 -- # wait 1702187 00:15:26.001 19:46:06 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:26.001 19:46:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:26.001 19:46:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:26.001 19:46:06 -- common/autotest_common.sh@10 -- # set +x 00:15:26.001 19:46:06 -- nvmf/common.sh@470 -- # nvmfpid=1703946 00:15:26.001 19:46:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:26.001 19:46:06 -- nvmf/common.sh@471 -- # waitforlisten 1703946 00:15:26.001 19:46:06 -- common/autotest_common.sh@817 -- # '[' -z 1703946 ']' 00:15:26.001 19:46:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.001 19:46:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:26.001 19:46:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:26.001 19:46:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:26.001 19:46:06 -- common/autotest_common.sh@10 -- # set +x 00:15:26.001 [2024-04-24 19:46:06.867673] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:15:26.001 [2024-04-24 19:46:06.867786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.001 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.001 [2024-04-24 19:46:06.937789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.001 [2024-04-24 19:46:07.051112] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.001 [2024-04-24 19:46:07.051182] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.001 [2024-04-24 19:46:07.051211] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.001 [2024-04-24 19:46:07.051225] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.001 [2024-04-24 19:46:07.051237] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.001 [2024-04-24 19:46:07.051273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.567 19:46:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:26.567 19:46:07 -- common/autotest_common.sh@850 -- # return 0 00:15:26.567 19:46:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:26.567 19:46:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:26.567 19:46:07 -- common/autotest_common.sh@10 -- # set +x 00:15:26.567 19:46:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.567 19:46:07 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.R3XR65JnTD 00:15:26.567 19:46:07 -- common/autotest_common.sh@638 -- # local es=0 00:15:26.567 19:46:07 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.R3XR65JnTD 00:15:26.567 19:46:07 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:15:26.567 19:46:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:26.567 19:46:07 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:15:26.567 19:46:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:26.567 19:46:07 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.R3XR65JnTD 00:15:26.567 19:46:07 -- target/tls.sh@49 -- # local key=/tmp/tmp.R3XR65JnTD 00:15:26.567 19:46:07 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:26.567 [2024-04-24 19:46:08.043692] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.567 19:46:08 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:27.134 19:46:08 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:27.134 [2024-04-24 19:46:08.581141] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:27.134 [2024-04-24 19:46:08.581385] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.134 19:46:08 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:27.393 malloc0 00:15:27.393 19:46:08 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:27.960 19:46:09 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R3XR65JnTD 00:15:27.960 [2024-04-24 19:46:09.450390] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:27.960 [2024-04-24 19:46:09.450439] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:27.960 [2024-04-24 19:46:09.450470] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:15:27.960 request: 00:15:27.960 { 00:15:27.960 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.960 "host": "nqn.2016-06.io.spdk:host1", 00:15:27.960 "psk": "/tmp/tmp.R3XR65JnTD", 00:15:27.960 "method": "nvmf_subsystem_add_host", 00:15:27.960 "req_id": 1 00:15:27.960 } 00:15:27.960 Got JSON-RPC error response 00:15:27.960 response: 00:15:27.960 { 00:15:27.960 "code": -32603, 00:15:27.960 "message": "Internal error" 00:15:27.960 } 00:15:27.960 19:46:09 -- common/autotest_common.sh@641 -- # es=1 00:15:27.960 19:46:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:27.960 19:46:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:27.960 19:46:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:27.960 19:46:09 -- target/tls.sh@180 -- # killprocess 1703946 00:15:27.960 19:46:09 -- common/autotest_common.sh@936 -- # '[' -z 1703946 ']' 00:15:27.960 19:46:09 -- common/autotest_common.sh@940 -- # kill -0 1703946 00:15:27.960 19:46:09 -- common/autotest_common.sh@941 -- # uname 00:15:28.219 19:46:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:28.219 19:46:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1703946 00:15:28.219 19:46:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:28.219 19:46:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:28.219 19:46:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1703946' 00:15:28.219 killing process with pid 1703946 00:15:28.219 19:46:09 -- common/autotest_common.sh@955 -- # kill 1703946 00:15:28.219 19:46:09 -- common/autotest_common.sh@960 -- # wait 1703946 00:15:28.480 19:46:09 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.R3XR65JnTD 00:15:28.480 19:46:09 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:28.480 19:46:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:28.480 19:46:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:28.480 19:46:09 -- common/autotest_common.sh@10 -- # set +x 00:15:28.480 19:46:09 -- nvmf/common.sh@470 -- # nvmfpid=1704371 00:15:28.480 19:46:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:28.480 19:46:09 -- nvmf/common.sh@471 -- # waitforlisten 1704371 00:15:28.480 19:46:09 -- common/autotest_common.sh@817 -- # '[' -z 1704371 ']' 00:15:28.480 19:46:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.480 19:46:09 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:15:28.480 19:46:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.480 19:46:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:28.480 19:46:09 -- common/autotest_common.sh@10 -- # set +x 00:15:28.480 [2024-04-24 19:46:09.851355] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:15:28.480 [2024-04-24 19:46:09.851440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.480 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.480 [2024-04-24 19:46:09.923101] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.738 [2024-04-24 19:46:10.035673] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.739 [2024-04-24 19:46:10.035732] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.739 [2024-04-24 19:46:10.035746] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.739 [2024-04-24 19:46:10.035757] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.739 [2024-04-24 19:46:10.035767] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.739 [2024-04-24 19:46:10.035806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.739 19:46:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:28.739 19:46:10 -- common/autotest_common.sh@850 -- # return 0 00:15:28.739 19:46:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:28.739 19:46:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:28.739 19:46:10 -- common/autotest_common.sh@10 -- # set +x 00:15:28.739 19:46:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.739 19:46:10 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.R3XR65JnTD 00:15:28.739 19:46:10 -- target/tls.sh@49 -- # local key=/tmp/tmp.R3XR65JnTD 00:15:28.739 19:46:10 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:28.996 [2024-04-24 19:46:10.453356] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.996 19:46:10 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:29.255 19:46:10 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:29.824 [2024-04-24 19:46:11.034922] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:29.824 [2024-04-24 19:46:11.035174] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.824 19:46:11 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:29.824 malloc0 00:15:30.083 19:46:11 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:30.341 19:46:11 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R3XR65JnTD 00:15:30.600 [2024-04-24 19:46:11.889252] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:30.600 19:46:11 -- target/tls.sh@188 -- # bdevperf_pid=1704544 00:15:30.600 19:46:11 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:30.600 19:46:11 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:30.600 19:46:11 -- target/tls.sh@191 -- # waitforlisten 1704544 /var/tmp/bdevperf.sock 00:15:30.600 19:46:11 -- common/autotest_common.sh@817 -- # '[' -z 1704544 ']' 00:15:30.600 19:46:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:30.600 19:46:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:30.600 19:46:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:30.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:30.600 19:46:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:30.600 19:46:11 -- common/autotest_common.sh@10 -- # set +x 00:15:30.600 [2024-04-24 19:46:11.952973] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:15:30.600 [2024-04-24 19:46:11.953082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1704544 ] 00:15:30.600 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.600 [2024-04-24 19:46:12.019016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.858 [2024-04-24 19:46:12.127993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.858 19:46:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:30.858 19:46:12 -- common/autotest_common.sh@850 -- # return 0 00:15:30.858 19:46:12 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R3XR65JnTD 00:15:31.117 [2024-04-24 19:46:12.461378] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:31.117 [2024-04-24 19:46:12.461508] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:31.117 TLSTESTn1 00:15:31.117 19:46:12 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:15:31.690 19:46:12 -- target/tls.sh@196 -- # tgtconf='{ 00:15:31.690 "subsystems": [ 00:15:31.690 { 00:15:31.690 "subsystem": "keyring", 00:15:31.690 "config": [] 00:15:31.690 }, 00:15:31.690 { 00:15:31.690 "subsystem": "iobuf", 00:15:31.690 "config": [ 00:15:31.690 { 00:15:31.690 "method": "iobuf_set_options", 00:15:31.690 "params": { 00:15:31.690 
"small_pool_count": 8192, 00:15:31.690 "large_pool_count": 1024, 00:15:31.690 "small_bufsize": 8192, 00:15:31.690 "large_bufsize": 135168 00:15:31.690 } 00:15:31.690 } 00:15:31.690 ] 00:15:31.690 }, 00:15:31.690 { 00:15:31.690 "subsystem": "sock", 00:15:31.690 "config": [ 00:15:31.690 { 00:15:31.690 "method": "sock_impl_set_options", 00:15:31.690 "params": { 00:15:31.690 "impl_name": "posix", 00:15:31.690 "recv_buf_size": 2097152, 00:15:31.690 "send_buf_size": 2097152, 00:15:31.690 "enable_recv_pipe": true, 00:15:31.690 "enable_quickack": false, 00:15:31.690 "enable_placement_id": 0, 00:15:31.690 "enable_zerocopy_send_server": true, 00:15:31.690 "enable_zerocopy_send_client": false, 00:15:31.690 "zerocopy_threshold": 0, 00:15:31.690 "tls_version": 0, 00:15:31.690 "enable_ktls": false 00:15:31.690 } 00:15:31.690 }, 00:15:31.690 { 00:15:31.690 "method": "sock_impl_set_options", 00:15:31.690 "params": { 00:15:31.690 "impl_name": "ssl", 00:15:31.690 "recv_buf_size": 4096, 00:15:31.690 "send_buf_size": 4096, 00:15:31.690 "enable_recv_pipe": true, 00:15:31.690 "enable_quickack": false, 00:15:31.690 "enable_placement_id": 0, 00:15:31.690 "enable_zerocopy_send_server": true, 00:15:31.690 "enable_zerocopy_send_client": false, 00:15:31.690 "zerocopy_threshold": 0, 00:15:31.690 "tls_version": 0, 00:15:31.690 "enable_ktls": false 00:15:31.690 } 00:15:31.690 } 00:15:31.690 ] 00:15:31.690 }, 00:15:31.690 { 00:15:31.690 "subsystem": "vmd", 00:15:31.690 "config": [] 00:15:31.690 }, 00:15:31.690 { 00:15:31.690 "subsystem": "accel", 00:15:31.690 "config": [ 00:15:31.690 { 00:15:31.690 "method": "accel_set_options", 00:15:31.690 "params": { 00:15:31.690 "small_cache_size": 128, 00:15:31.690 "large_cache_size": 16, 00:15:31.690 "task_count": 2048, 00:15:31.690 "sequence_count": 2048, 00:15:31.690 "buf_count": 2048 00:15:31.690 } 00:15:31.690 } 00:15:31.690 ] 00:15:31.690 }, 00:15:31.690 { 00:15:31.691 "subsystem": "bdev", 00:15:31.691 "config": [ 00:15:31.691 { 00:15:31.691 "method": "bdev_set_options", 00:15:31.691 "params": { 00:15:31.691 "bdev_io_pool_size": 65535, 00:15:31.691 "bdev_io_cache_size": 256, 00:15:31.691 "bdev_auto_examine": true, 00:15:31.691 "iobuf_small_cache_size": 128, 00:15:31.691 "iobuf_large_cache_size": 16 00:15:31.691 } 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "method": "bdev_raid_set_options", 00:15:31.691 "params": { 00:15:31.691 "process_window_size_kb": 1024 00:15:31.691 } 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "method": "bdev_iscsi_set_options", 00:15:31.691 "params": { 00:15:31.691 "timeout_sec": 30 00:15:31.691 } 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "method": "bdev_nvme_set_options", 00:15:31.691 "params": { 00:15:31.691 "action_on_timeout": "none", 00:15:31.691 "timeout_us": 0, 00:15:31.691 "timeout_admin_us": 0, 00:15:31.691 "keep_alive_timeout_ms": 10000, 00:15:31.691 "arbitration_burst": 0, 00:15:31.691 "low_priority_weight": 0, 00:15:31.691 "medium_priority_weight": 0, 00:15:31.691 "high_priority_weight": 0, 00:15:31.691 "nvme_adminq_poll_period_us": 10000, 00:15:31.691 "nvme_ioq_poll_period_us": 0, 00:15:31.691 "io_queue_requests": 0, 00:15:31.691 "delay_cmd_submit": true, 00:15:31.691 "transport_retry_count": 4, 00:15:31.691 "bdev_retry_count": 3, 00:15:31.691 "transport_ack_timeout": 0, 00:15:31.691 "ctrlr_loss_timeout_sec": 0, 00:15:31.691 "reconnect_delay_sec": 0, 00:15:31.691 "fast_io_fail_timeout_sec": 0, 00:15:31.691 "disable_auto_failback": false, 00:15:31.691 "generate_uuids": false, 00:15:31.691 "transport_tos": 0, 00:15:31.691 "nvme_error_stat": 
false, 00:15:31.691 "rdma_srq_size": 0, 00:15:31.691 "io_path_stat": false, 00:15:31.691 "allow_accel_sequence": false, 00:15:31.691 "rdma_max_cq_size": 0, 00:15:31.691 "rdma_cm_event_timeout_ms": 0, 00:15:31.691 "dhchap_digests": [ 00:15:31.691 "sha256", 00:15:31.691 "sha384", 00:15:31.691 "sha512" 00:15:31.691 ], 00:15:31.691 "dhchap_dhgroups": [ 00:15:31.691 "null", 00:15:31.691 "ffdhe2048", 00:15:31.691 "ffdhe3072", 00:15:31.691 "ffdhe4096", 00:15:31.691 "ffdhe6144", 00:15:31.691 "ffdhe8192" 00:15:31.691 ] 00:15:31.691 } 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "method": "bdev_nvme_set_hotplug", 00:15:31.691 "params": { 00:15:31.691 "period_us": 100000, 00:15:31.691 "enable": false 00:15:31.691 } 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "method": "bdev_malloc_create", 00:15:31.691 "params": { 00:15:31.691 "name": "malloc0", 00:15:31.691 "num_blocks": 8192, 00:15:31.691 "block_size": 4096, 00:15:31.691 "physical_block_size": 4096, 00:15:31.691 "uuid": "4f3b006b-da8c-4675-a743-52336cdcdba0", 00:15:31.691 "optimal_io_boundary": 0 00:15:31.691 } 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "method": "bdev_wait_for_examine" 00:15:31.691 } 00:15:31.691 ] 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "subsystem": "nbd", 00:15:31.691 "config": [] 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "subsystem": "scheduler", 00:15:31.691 "config": [ 00:15:31.691 { 00:15:31.691 "method": "framework_set_scheduler", 00:15:31.691 "params": { 00:15:31.691 "name": "static" 00:15:31.691 } 00:15:31.691 } 00:15:31.691 ] 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "subsystem": "nvmf", 00:15:31.691 "config": [ 00:15:31.691 { 00:15:31.691 "method": "nvmf_set_config", 00:15:31.691 "params": { 00:15:31.691 "discovery_filter": "match_any", 00:15:31.691 "admin_cmd_passthru": { 00:15:31.691 "identify_ctrlr": false 00:15:31.691 } 00:15:31.691 } 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "method": "nvmf_set_max_subsystems", 00:15:31.691 "params": { 00:15:31.691 "max_subsystems": 1024 00:15:31.691 } 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "method": "nvmf_set_crdt", 00:15:31.691 "params": { 00:15:31.691 "crdt1": 0, 00:15:31.691 "crdt2": 0, 00:15:31.691 "crdt3": 0 00:15:31.691 } 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "method": "nvmf_create_transport", 00:15:31.691 "params": { 00:15:31.691 "trtype": "TCP", 00:15:31.691 "max_queue_depth": 128, 00:15:31.691 "max_io_qpairs_per_ctrlr": 127, 00:15:31.691 "in_capsule_data_size": 4096, 00:15:31.691 "max_io_size": 131072, 00:15:31.691 "io_unit_size": 131072, 00:15:31.691 "max_aq_depth": 128, 00:15:31.691 "num_shared_buffers": 511, 00:15:31.691 "buf_cache_size": 4294967295, 00:15:31.691 "dif_insert_or_strip": false, 00:15:31.691 "zcopy": false, 00:15:31.691 "c2h_success": false, 00:15:31.691 "sock_priority": 0, 00:15:31.691 "abort_timeout_sec": 1, 00:15:31.691 "ack_timeout": 0, 00:15:31.691 "data_wr_pool_size": 0 00:15:31.691 } 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "method": "nvmf_create_subsystem", 00:15:31.691 "params": { 00:15:31.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.691 "allow_any_host": false, 00:15:31.691 "serial_number": "SPDK00000000000001", 00:15:31.691 "model_number": "SPDK bdev Controller", 00:15:31.691 "max_namespaces": 10, 00:15:31.691 "min_cntlid": 1, 00:15:31.691 "max_cntlid": 65519, 00:15:31.691 "ana_reporting": false 00:15:31.691 } 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "method": "nvmf_subsystem_add_host", 00:15:31.691 "params": { 00:15:31.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.691 "host": "nqn.2016-06.io.spdk:host1", 
00:15:31.691 "psk": "/tmp/tmp.R3XR65JnTD" 00:15:31.691 } 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "method": "nvmf_subsystem_add_ns", 00:15:31.691 "params": { 00:15:31.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.691 "namespace": { 00:15:31.691 "nsid": 1, 00:15:31.691 "bdev_name": "malloc0", 00:15:31.691 "nguid": "4F3B006BDA8C4675A74352336CDCDBA0", 00:15:31.691 "uuid": "4f3b006b-da8c-4675-a743-52336cdcdba0", 00:15:31.691 "no_auto_visible": false 00:15:31.691 } 00:15:31.691 } 00:15:31.691 }, 00:15:31.691 { 00:15:31.691 "method": "nvmf_subsystem_add_listener", 00:15:31.691 "params": { 00:15:31.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.691 "listen_address": { 00:15:31.691 "trtype": "TCP", 00:15:31.691 "adrfam": "IPv4", 00:15:31.691 "traddr": "10.0.0.2", 00:15:31.691 "trsvcid": "4420" 00:15:31.691 }, 00:15:31.691 "secure_channel": true 00:15:31.691 } 00:15:31.691 } 00:15:31.691 ] 00:15:31.691 } 00:15:31.691 ] 00:15:31.691 }' 00:15:31.691 19:46:12 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:31.949 19:46:13 -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:31.949 "subsystems": [ 00:15:31.949 { 00:15:31.949 "subsystem": "keyring", 00:15:31.949 "config": [] 00:15:31.949 }, 00:15:31.949 { 00:15:31.949 "subsystem": "iobuf", 00:15:31.949 "config": [ 00:15:31.949 { 00:15:31.949 "method": "iobuf_set_options", 00:15:31.949 "params": { 00:15:31.949 "small_pool_count": 8192, 00:15:31.949 "large_pool_count": 1024, 00:15:31.949 "small_bufsize": 8192, 00:15:31.949 "large_bufsize": 135168 00:15:31.949 } 00:15:31.949 } 00:15:31.949 ] 00:15:31.949 }, 00:15:31.949 { 00:15:31.949 "subsystem": "sock", 00:15:31.949 "config": [ 00:15:31.949 { 00:15:31.949 "method": "sock_impl_set_options", 00:15:31.949 "params": { 00:15:31.949 "impl_name": "posix", 00:15:31.949 "recv_buf_size": 2097152, 00:15:31.949 "send_buf_size": 2097152, 00:15:31.949 "enable_recv_pipe": true, 00:15:31.949 "enable_quickack": false, 00:15:31.949 "enable_placement_id": 0, 00:15:31.949 "enable_zerocopy_send_server": true, 00:15:31.949 "enable_zerocopy_send_client": false, 00:15:31.949 "zerocopy_threshold": 0, 00:15:31.949 "tls_version": 0, 00:15:31.949 "enable_ktls": false 00:15:31.949 } 00:15:31.949 }, 00:15:31.949 { 00:15:31.949 "method": "sock_impl_set_options", 00:15:31.949 "params": { 00:15:31.949 "impl_name": "ssl", 00:15:31.949 "recv_buf_size": 4096, 00:15:31.949 "send_buf_size": 4096, 00:15:31.949 "enable_recv_pipe": true, 00:15:31.949 "enable_quickack": false, 00:15:31.949 "enable_placement_id": 0, 00:15:31.949 "enable_zerocopy_send_server": true, 00:15:31.949 "enable_zerocopy_send_client": false, 00:15:31.949 "zerocopy_threshold": 0, 00:15:31.949 "tls_version": 0, 00:15:31.949 "enable_ktls": false 00:15:31.949 } 00:15:31.949 } 00:15:31.949 ] 00:15:31.949 }, 00:15:31.949 { 00:15:31.949 "subsystem": "vmd", 00:15:31.949 "config": [] 00:15:31.949 }, 00:15:31.949 { 00:15:31.949 "subsystem": "accel", 00:15:31.949 "config": [ 00:15:31.949 { 00:15:31.949 "method": "accel_set_options", 00:15:31.949 "params": { 00:15:31.949 "small_cache_size": 128, 00:15:31.949 "large_cache_size": 16, 00:15:31.949 "task_count": 2048, 00:15:31.949 "sequence_count": 2048, 00:15:31.949 "buf_count": 2048 00:15:31.949 } 00:15:31.949 } 00:15:31.949 ] 00:15:31.949 }, 00:15:31.949 { 00:15:31.949 "subsystem": "bdev", 00:15:31.949 "config": [ 00:15:31.949 { 00:15:31.949 "method": "bdev_set_options", 00:15:31.949 "params": { 00:15:31.949 "bdev_io_pool_size": 65535, 
00:15:31.949 "bdev_io_cache_size": 256, 00:15:31.949 "bdev_auto_examine": true, 00:15:31.949 "iobuf_small_cache_size": 128, 00:15:31.949 "iobuf_large_cache_size": 16 00:15:31.950 } 00:15:31.950 }, 00:15:31.950 { 00:15:31.950 "method": "bdev_raid_set_options", 00:15:31.950 "params": { 00:15:31.950 "process_window_size_kb": 1024 00:15:31.950 } 00:15:31.950 }, 00:15:31.950 { 00:15:31.950 "method": "bdev_iscsi_set_options", 00:15:31.950 "params": { 00:15:31.950 "timeout_sec": 30 00:15:31.950 } 00:15:31.950 }, 00:15:31.950 { 00:15:31.950 "method": "bdev_nvme_set_options", 00:15:31.950 "params": { 00:15:31.950 "action_on_timeout": "none", 00:15:31.950 "timeout_us": 0, 00:15:31.950 "timeout_admin_us": 0, 00:15:31.950 "keep_alive_timeout_ms": 10000, 00:15:31.950 "arbitration_burst": 0, 00:15:31.950 "low_priority_weight": 0, 00:15:31.950 "medium_priority_weight": 0, 00:15:31.950 "high_priority_weight": 0, 00:15:31.950 "nvme_adminq_poll_period_us": 10000, 00:15:31.950 "nvme_ioq_poll_period_us": 0, 00:15:31.950 "io_queue_requests": 512, 00:15:31.950 "delay_cmd_submit": true, 00:15:31.950 "transport_retry_count": 4, 00:15:31.950 "bdev_retry_count": 3, 00:15:31.950 "transport_ack_timeout": 0, 00:15:31.950 "ctrlr_loss_timeout_sec": 0, 00:15:31.950 "reconnect_delay_sec": 0, 00:15:31.950 "fast_io_fail_timeout_sec": 0, 00:15:31.950 "disable_auto_failback": false, 00:15:31.950 "generate_uuids": false, 00:15:31.950 "transport_tos": 0, 00:15:31.950 "nvme_error_stat": false, 00:15:31.950 "rdma_srq_size": 0, 00:15:31.950 "io_path_stat": false, 00:15:31.950 "allow_accel_sequence": false, 00:15:31.950 "rdma_max_cq_size": 0, 00:15:31.950 "rdma_cm_event_timeout_ms": 0, 00:15:31.950 "dhchap_digests": [ 00:15:31.950 "sha256", 00:15:31.950 "sha384", 00:15:31.950 "sha512" 00:15:31.950 ], 00:15:31.950 "dhchap_dhgroups": [ 00:15:31.950 "null", 00:15:31.950 "ffdhe2048", 00:15:31.950 "ffdhe3072", 00:15:31.950 "ffdhe4096", 00:15:31.950 "ffdhe6144", 00:15:31.950 "ffdhe8192" 00:15:31.950 ] 00:15:31.950 } 00:15:31.950 }, 00:15:31.950 { 00:15:31.950 "method": "bdev_nvme_attach_controller", 00:15:31.950 "params": { 00:15:31.950 "name": "TLSTEST", 00:15:31.950 "trtype": "TCP", 00:15:31.950 "adrfam": "IPv4", 00:15:31.950 "traddr": "10.0.0.2", 00:15:31.950 "trsvcid": "4420", 00:15:31.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.950 "prchk_reftag": false, 00:15:31.950 "prchk_guard": false, 00:15:31.950 "ctrlr_loss_timeout_sec": 0, 00:15:31.950 "reconnect_delay_sec": 0, 00:15:31.950 "fast_io_fail_timeout_sec": 0, 00:15:31.950 "psk": "/tmp/tmp.R3XR65JnTD", 00:15:31.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:31.950 "hdgst": false, 00:15:31.950 "ddgst": false 00:15:31.950 } 00:15:31.950 }, 00:15:31.950 { 00:15:31.950 "method": "bdev_nvme_set_hotplug", 00:15:31.950 "params": { 00:15:31.950 "period_us": 100000, 00:15:31.950 "enable": false 00:15:31.950 } 00:15:31.950 }, 00:15:31.950 { 00:15:31.950 "method": "bdev_wait_for_examine" 00:15:31.950 } 00:15:31.950 ] 00:15:31.950 }, 00:15:31.950 { 00:15:31.950 "subsystem": "nbd", 00:15:31.950 "config": [] 00:15:31.950 } 00:15:31.950 ] 00:15:31.950 }' 00:15:31.950 19:46:13 -- target/tls.sh@199 -- # killprocess 1704544 00:15:31.950 19:46:13 -- common/autotest_common.sh@936 -- # '[' -z 1704544 ']' 00:15:31.950 19:46:13 -- common/autotest_common.sh@940 -- # kill -0 1704544 00:15:31.950 19:46:13 -- common/autotest_common.sh@941 -- # uname 00:15:31.950 19:46:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:31.950 19:46:13 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 1704544 00:15:31.950 19:46:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:31.950 19:46:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:31.950 19:46:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1704544' 00:15:31.950 killing process with pid 1704544 00:15:31.950 19:46:13 -- common/autotest_common.sh@955 -- # kill 1704544 00:15:31.950 Received shutdown signal, test time was about 10.000000 seconds 00:15:31.950 00:15:31.950 Latency(us) 00:15:31.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.950 =================================================================================================================== 00:15:31.950 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:31.950 [2024-04-24 19:46:13.282749] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:31.950 19:46:13 -- common/autotest_common.sh@960 -- # wait 1704544 00:15:32.210 19:46:13 -- target/tls.sh@200 -- # killprocess 1704371 00:15:32.210 19:46:13 -- common/autotest_common.sh@936 -- # '[' -z 1704371 ']' 00:15:32.210 19:46:13 -- common/autotest_common.sh@940 -- # kill -0 1704371 00:15:32.210 19:46:13 -- common/autotest_common.sh@941 -- # uname 00:15:32.210 19:46:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:32.210 19:46:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1704371 00:15:32.210 19:46:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:32.210 19:46:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:32.210 19:46:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1704371' 00:15:32.210 killing process with pid 1704371 00:15:32.210 19:46:13 -- common/autotest_common.sh@955 -- # kill 1704371 00:15:32.210 [2024-04-24 19:46:13.547131] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:32.210 19:46:13 -- common/autotest_common.sh@960 -- # wait 1704371 00:15:32.471 19:46:13 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:32.471 19:46:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:32.471 19:46:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:32.471 19:46:13 -- target/tls.sh@203 -- # echo '{ 00:15:32.471 "subsystems": [ 00:15:32.471 { 00:15:32.471 "subsystem": "keyring", 00:15:32.471 "config": [] 00:15:32.471 }, 00:15:32.471 { 00:15:32.471 "subsystem": "iobuf", 00:15:32.471 "config": [ 00:15:32.471 { 00:15:32.471 "method": "iobuf_set_options", 00:15:32.471 "params": { 00:15:32.471 "small_pool_count": 8192, 00:15:32.471 "large_pool_count": 1024, 00:15:32.471 "small_bufsize": 8192, 00:15:32.471 "large_bufsize": 135168 00:15:32.471 } 00:15:32.471 } 00:15:32.471 ] 00:15:32.471 }, 00:15:32.471 { 00:15:32.471 "subsystem": "sock", 00:15:32.471 "config": [ 00:15:32.471 { 00:15:32.471 "method": "sock_impl_set_options", 00:15:32.471 "params": { 00:15:32.471 "impl_name": "posix", 00:15:32.471 "recv_buf_size": 2097152, 00:15:32.471 "send_buf_size": 2097152, 00:15:32.471 "enable_recv_pipe": true, 00:15:32.471 "enable_quickack": false, 00:15:32.471 "enable_placement_id": 0, 00:15:32.471 "enable_zerocopy_send_server": true, 00:15:32.471 "enable_zerocopy_send_client": false, 00:15:32.471 "zerocopy_threshold": 0, 00:15:32.471 "tls_version": 0, 00:15:32.471 "enable_ktls": false 
00:15:32.471 } 00:15:32.471 }, 00:15:32.471 { 00:15:32.471 "method": "sock_impl_set_options", 00:15:32.471 "params": { 00:15:32.471 "impl_name": "ssl", 00:15:32.471 "recv_buf_size": 4096, 00:15:32.471 "send_buf_size": 4096, 00:15:32.471 "enable_recv_pipe": true, 00:15:32.471 "enable_quickack": false, 00:15:32.471 "enable_placement_id": 0, 00:15:32.471 "enable_zerocopy_send_server": true, 00:15:32.471 "enable_zerocopy_send_client": false, 00:15:32.471 "zerocopy_threshold": 0, 00:15:32.471 "tls_version": 0, 00:15:32.471 "enable_ktls": false 00:15:32.471 } 00:15:32.471 } 00:15:32.471 ] 00:15:32.471 }, 00:15:32.471 { 00:15:32.471 "subsystem": "vmd", 00:15:32.471 "config": [] 00:15:32.471 }, 00:15:32.471 { 00:15:32.471 "subsystem": "accel", 00:15:32.471 "config": [ 00:15:32.471 { 00:15:32.471 "method": "accel_set_options", 00:15:32.471 "params": { 00:15:32.471 "small_cache_size": 128, 00:15:32.471 "large_cache_size": 16, 00:15:32.471 "task_count": 2048, 00:15:32.471 "sequence_count": 2048, 00:15:32.471 "buf_count": 2048 00:15:32.471 } 00:15:32.471 } 00:15:32.471 ] 00:15:32.471 }, 00:15:32.471 { 00:15:32.471 "subsystem": "bdev", 00:15:32.471 "config": [ 00:15:32.471 { 00:15:32.471 "method": "bdev_set_options", 00:15:32.471 "params": { 00:15:32.471 "bdev_io_pool_size": 65535, 00:15:32.471 "bdev_io_cache_size": 256, 00:15:32.471 "bdev_auto_examine": true, 00:15:32.471 "iobuf_small_cache_size": 128, 00:15:32.471 "iobuf_large_cache_size": 16 00:15:32.471 } 00:15:32.471 }, 00:15:32.471 { 00:15:32.471 "method": "bdev_raid_set_options", 00:15:32.471 "params": { 00:15:32.471 "process_window_size_kb": 1024 00:15:32.471 } 00:15:32.471 }, 00:15:32.471 { 00:15:32.471 "method": "bdev_iscsi_set_options", 00:15:32.471 "params": { 00:15:32.471 "timeout_sec": 30 00:15:32.471 } 00:15:32.471 }, 00:15:32.471 { 00:15:32.471 "method": "bdev_nvme_set_options", 00:15:32.471 "params": { 00:15:32.471 "action_on_timeout": "none", 00:15:32.471 "timeout_us": 0, 00:15:32.471 "timeout_admin_us": 0, 00:15:32.471 "keep_alive_timeout_ms": 10000, 00:15:32.471 "arbitration_burst": 0, 00:15:32.471 "low_priority_weight": 0, 00:15:32.471 "medium_priority_weight": 0, 00:15:32.471 "high_priority_weight": 0, 00:15:32.471 "nvme_adminq_poll_period_us": 10000, 00:15:32.471 "nvme_ioq_poll_period_us": 0, 00:15:32.471 "io_queue_requests": 0, 00:15:32.471 "delay_cmd_submit": true, 00:15:32.471 "transport_retry_count": 4, 00:15:32.471 "bdev_retry_count": 3, 00:15:32.471 "transport_ack_timeout": 0, 00:15:32.471 "ctrlr_loss_timeout_sec": 0, 00:15:32.471 "reconnect_delay_sec": 0, 00:15:32.471 "fast_io_fail_timeout_sec": 0, 00:15:32.471 "disable_auto_failback": false, 00:15:32.471 "generate_uuids": false, 00:15:32.471 "transport_tos": 0, 00:15:32.471 "nvme_error_stat": false, 00:15:32.471 "rdma_srq_size": 0, 00:15:32.471 "io_path_stat": false, 00:15:32.471 "allow_accel_sequence": false, 00:15:32.471 "rdma_max_cq_size": 0, 00:15:32.471 "rdma_cm_event_timeout_ms": 0, 00:15:32.471 "dhchap_digests": [ 00:15:32.471 "sha256", 00:15:32.471 "sha384", 00:15:32.471 "sha512" 00:15:32.471 ], 00:15:32.472 "dhchap_dhgroups": [ 00:15:32.472 "null", 00:15:32.472 "ffdhe2048", 00:15:32.472 "ffdhe3072", 00:15:32.472 "ffdhe4096", 00:15:32.472 "ffdhe6144", 00:15:32.472 "ffdhe8192" 00:15:32.472 ] 00:15:32.472 } 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "method": "bdev_nvme_set_hotplug", 00:15:32.472 "params": { 00:15:32.472 "period_us": 100000, 00:15:32.472 "enable": false 00:15:32.472 } 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "method": "bdev_malloc_create", 
00:15:32.472 "params": { 00:15:32.472 "name": "malloc0", 00:15:32.472 "num_blocks": 8192, 00:15:32.472 "block_size": 4096, 00:15:32.472 "physical_block_size": 4096, 00:15:32.472 "uuid": "4f3b006b-da8c-4675-a743-52336cdcdba0", 00:15:32.472 "optimal_io_boundary": 0 00:15:32.472 } 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "method": "bdev_wait_for_examine" 00:15:32.472 } 00:15:32.472 ] 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "subsystem": "nbd", 00:15:32.472 "config": [] 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "subsystem": "scheduler", 00:15:32.472 "config": [ 00:15:32.472 { 00:15:32.472 "method": "framework_set_scheduler", 00:15:32.472 "params": { 00:15:32.472 "name": "static" 00:15:32.472 } 00:15:32.472 } 00:15:32.472 ] 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "subsystem": "nvmf", 00:15:32.472 "config": [ 00:15:32.472 { 00:15:32.472 "method": "nvmf_set_config", 00:15:32.472 "params": { 00:15:32.472 "discovery_filter": "match_any", 00:15:32.472 "admin_cmd_passthru": { 00:15:32.472 "identify_ctrlr": false 00:15:32.472 } 00:15:32.472 } 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "method": "nvmf_set_max_subsystems", 00:15:32.472 "params": { 00:15:32.472 "max_subsystems": 1024 00:15:32.472 } 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "method": "nvmf_set_crdt", 00:15:32.472 "params": { 00:15:32.472 "crdt1": 0, 00:15:32.472 "crdt2": 0, 00:15:32.472 "crdt3": 0 00:15:32.472 } 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "method": "nvmf_create_transport", 00:15:32.472 "params": { 00:15:32.472 "trtype": "TCP", 00:15:32.472 "max_queue_depth": 128, 00:15:32.472 "max_io_qpairs_per_ctrlr": 127, 00:15:32.472 "in_capsule_data_size": 4096, 00:15:32.472 "max_io_size": 131072, 00:15:32.472 "io_unit_size": 131072, 00:15:32.472 "max_aq_depth": 128, 00:15:32.472 "num_shared_buffers": 511, 00:15:32.472 "buf_cache_size": 4294967295, 00:15:32.472 "dif_insert_or_strip": false, 00:15:32.472 "zcopy": false, 00:15:32.472 "c2h_success": false, 00:15:32.472 "sock_priority": 0, 00:15:32.472 "abort_timeout_sec": 1, 00:15:32.472 "ack_timeout": 0, 00:15:32.472 "data_wr_pool_size": 0 00:15:32.472 } 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "method": "nvmf_create_subsystem", 00:15:32.472 "params": { 00:15:32.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.472 "allow_any_host": false, 00:15:32.472 "serial_number": "SPDK00000000000001", 00:15:32.472 "model_number": "SPDK bdev Controller", 00:15:32.472 "max_namespaces": 10, 00:15:32.472 "min_cntlid": 1, 00:15:32.472 "max_cntlid": 65519, 00:15:32.472 "ana_reporting": false 00:15:32.472 } 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "method": "nvmf_subsystem_add_host", 00:15:32.472 "params": { 00:15:32.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.472 "host": "nqn.2016-06.io.spdk:host1", 00:15:32.472 "psk": "/tmp/tmp.R3XR65JnTD" 00:15:32.472 } 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "method": "nvmf_subsystem_add_ns", 00:15:32.472 "params": { 00:15:32.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.472 "namespace": { 00:15:32.472 "nsid": 1, 00:15:32.472 "bdev_name": "malloc0", 00:15:32.472 "nguid": "4F3B006BDA8C4675A74352336CDCDBA0", 00:15:32.472 "uuid": "4f3b006b-da8c-4675-a743-52336cdcdba0", 00:15:32.472 "no_auto_visible": false 00:15:32.472 } 00:15:32.472 } 00:15:32.472 }, 00:15:32.472 { 00:15:32.472 "method": "nvmf_subsystem_add_listener", 00:15:32.472 "params": { 00:15:32.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.472 "listen_address": { 00:15:32.472 "trtype": "TCP", 00:15:32.472 "adrfam": "IPv4", 00:15:32.472 "traddr": "10.0.0.2", 00:15:32.472 
"trsvcid": "4420" 00:15:32.472 }, 00:15:32.472 "secure_channel": true 00:15:32.472 } 00:15:32.472 } 00:15:32.472 ] 00:15:32.472 } 00:15:32.472 ] 00:15:32.472 }' 00:15:32.472 19:46:13 -- common/autotest_common.sh@10 -- # set +x 00:15:32.472 19:46:13 -- nvmf/common.sh@470 -- # nvmfpid=1704817 00:15:32.472 19:46:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:32.472 19:46:13 -- nvmf/common.sh@471 -- # waitforlisten 1704817 00:15:32.472 19:46:13 -- common/autotest_common.sh@817 -- # '[' -z 1704817 ']' 00:15:32.472 19:46:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.472 19:46:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:32.472 19:46:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.472 19:46:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:32.472 19:46:13 -- common/autotest_common.sh@10 -- # set +x 00:15:32.472 [2024-04-24 19:46:13.902114] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:15:32.472 [2024-04-24 19:46:13.902196] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.472 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.472 [2024-04-24 19:46:13.976337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.731 [2024-04-24 19:46:14.096219] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.731 [2024-04-24 19:46:14.096291] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.731 [2024-04-24 19:46:14.096308] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.731 [2024-04-24 19:46:14.096322] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.731 [2024-04-24 19:46:14.096334] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:32.731 [2024-04-24 19:46:14.096448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.991 [2024-04-24 19:46:14.327683] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.991 [2024-04-24 19:46:14.343623] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:32.991 [2024-04-24 19:46:14.359692] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:32.991 [2024-04-24 19:46:14.373850] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.558 19:46:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:33.558 19:46:14 -- common/autotest_common.sh@850 -- # return 0 00:15:33.558 19:46:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:33.558 19:46:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:33.558 19:46:14 -- common/autotest_common.sh@10 -- # set +x 00:15:33.558 19:46:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.558 19:46:14 -- target/tls.sh@207 -- # bdevperf_pid=1704970 00:15:33.558 19:46:14 -- target/tls.sh@208 -- # waitforlisten 1704970 /var/tmp/bdevperf.sock 00:15:33.558 19:46:14 -- common/autotest_common.sh@817 -- # '[' -z 1704970 ']' 00:15:33.558 19:46:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:33.558 19:46:14 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:33.558 19:46:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:33.558 19:46:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:33.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
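[annotation] bdevperf is launched here with '-z', so it initializes, binds its RPC socket, and then idles instead of running I/O immediately. A sketch of the launch as traced above, again assuming the '-c /dev/fd/63' config comes from process substitution:

    # -z: wait for an RPC trigger instead of starting the workload at once
    # -r: RPC socket that bdevperf.py talks to later
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")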
00:15:33.558 19:46:14 -- target/tls.sh@204 -- # echo '{ 00:15:33.558 "subsystems": [ 00:15:33.558 { 00:15:33.558 "subsystem": "keyring", 00:15:33.558 "config": [] 00:15:33.558 }, 00:15:33.558 { 00:15:33.558 "subsystem": "iobuf", 00:15:33.558 "config": [ 00:15:33.558 { 00:15:33.558 "method": "iobuf_set_options", 00:15:33.558 "params": { 00:15:33.558 "small_pool_count": 8192, 00:15:33.558 "large_pool_count": 1024, 00:15:33.558 "small_bufsize": 8192, 00:15:33.558 "large_bufsize": 135168 00:15:33.558 } 00:15:33.558 } 00:15:33.558 ] 00:15:33.558 }, 00:15:33.558 { 00:15:33.558 "subsystem": "sock", 00:15:33.558 "config": [ 00:15:33.558 { 00:15:33.558 "method": "sock_impl_set_options", 00:15:33.558 "params": { 00:15:33.558 "impl_name": "posix", 00:15:33.558 "recv_buf_size": 2097152, 00:15:33.558 "send_buf_size": 2097152, 00:15:33.558 "enable_recv_pipe": true, 00:15:33.559 "enable_quickack": false, 00:15:33.559 "enable_placement_id": 0, 00:15:33.559 "enable_zerocopy_send_server": true, 00:15:33.559 "enable_zerocopy_send_client": false, 00:15:33.559 "zerocopy_threshold": 0, 00:15:33.559 "tls_version": 0, 00:15:33.559 "enable_ktls": false 00:15:33.559 } 00:15:33.559 }, 00:15:33.559 { 00:15:33.559 "method": "sock_impl_set_options", 00:15:33.559 "params": { 00:15:33.559 "impl_name": "ssl", 00:15:33.559 "recv_buf_size": 4096, 00:15:33.559 "send_buf_size": 4096, 00:15:33.559 "enable_recv_pipe": true, 00:15:33.559 "enable_quickack": false, 00:15:33.559 "enable_placement_id": 0, 00:15:33.559 "enable_zerocopy_send_server": true, 00:15:33.559 "enable_zerocopy_send_client": false, 00:15:33.559 "zerocopy_threshold": 0, 00:15:33.559 "tls_version": 0, 00:15:33.559 "enable_ktls": false 00:15:33.559 } 00:15:33.559 } 00:15:33.559 ] 00:15:33.559 }, 00:15:33.559 { 00:15:33.559 "subsystem": "vmd", 00:15:33.559 "config": [] 00:15:33.559 }, 00:15:33.559 { 00:15:33.559 "subsystem": "accel", 00:15:33.559 "config": [ 00:15:33.559 { 00:15:33.559 "method": "accel_set_options", 00:15:33.559 "params": { 00:15:33.559 "small_cache_size": 128, 00:15:33.559 "large_cache_size": 16, 00:15:33.559 "task_count": 2048, 00:15:33.559 "sequence_count": 2048, 00:15:33.559 "buf_count": 2048 00:15:33.559 } 00:15:33.559 } 00:15:33.559 ] 00:15:33.559 }, 00:15:33.559 { 00:15:33.559 "subsystem": "bdev", 00:15:33.559 "config": [ 00:15:33.559 { 00:15:33.559 "method": "bdev_set_options", 00:15:33.559 "params": { 00:15:33.559 "bdev_io_pool_size": 65535, 00:15:33.559 "bdev_io_cache_size": 256, 00:15:33.559 "bdev_auto_examine": true, 00:15:33.559 "iobuf_small_cache_size": 128, 00:15:33.559 "iobuf_large_cache_size": 16 00:15:33.559 } 00:15:33.559 }, 00:15:33.559 { 00:15:33.559 "method": "bdev_raid_set_options", 00:15:33.559 "params": { 00:15:33.559 "process_window_size_kb": 1024 00:15:33.559 } 00:15:33.559 }, 00:15:33.559 { 00:15:33.559 "method": "bdev_iscsi_set_options", 00:15:33.559 "params": { 00:15:33.559 "timeout_sec": 30 00:15:33.559 } 00:15:33.559 }, 00:15:33.559 { 00:15:33.559 "method": "bdev_nvme_set_options", 00:15:33.559 "params": { 00:15:33.559 "action_on_timeout": "none", 00:15:33.559 "timeout_us": 0, 00:15:33.559 "timeout_admin_us": 0, 00:15:33.559 "keep_alive_timeout_ms": 10000, 00:15:33.559 "arbitration_burst": 0, 00:15:33.559 "low_priority_weight": 0, 00:15:33.559 "medium_priority_weight": 0, 00:15:33.559 "high_priority_weight": 0, 00:15:33.559 "nvme_adminq_poll_period_us": 10000, 00:15:33.559 "nvme_ioq_poll_period_us": 0, 00:15:33.559 "io_queue_requests": 512, 00:15:33.559 "delay_cmd_submit": true, 00:15:33.559 "transport_retry_count": 
4, 00:15:33.559 "bdev_retry_count": 3, 00:15:33.559 "transport_ack_timeout": 0, 00:15:33.559 "ctrlr_loss_timeout_sec": 0, 00:15:33.559 "reconnect_delay_sec": 0, 00:15:33.559 "fast_io_fail_timeout_sec": 0, 00:15:33.559 "disable_auto_failback": false, 00:15:33.559 "generate_uuids": false, 00:15:33.559 "transport_tos": 0, 00:15:33.559 "nvme_error_stat": false, 00:15:33.559 "rdma_srq_size": 0, 00:15:33.559 "io_path_stat": false, 00:15:33.559 "allow_accel_sequence": false, 00:15:33.559 "rdma_max_cq_size": 0, 00:15:33.559 "rdma_cm_event_timeout_ms": 0, 00:15:33.559 "dhchap_digests": [ 00:15:33.559 "sha256", 00:15:33.559 "sha384", 00:15:33.559 "sha512" 00:15:33.559 ], 00:15:33.559 "dhchap_dhgroups": [ 00:15:33.559 "null", 00:15:33.559 "ffdhe2048", 00:15:33.559 "ffdhe3072", 00:15:33.559 "ffdhe4096", 00:15:33.559 "ffdhe6144", 00:15:33.559 "ffdhe8192" 00:15:33.559 ] 00:15:33.559 } 00:15:33.559 }, 00:15:33.559 { 00:15:33.559 "method": "bdev_nvme_attach_controller", 00:15:33.559 "params": { 00:15:33.559 "name": "TLSTEST", 00:15:33.559 "trtype": "TCP", 00:15:33.559 "adrfam": "IPv4", 00:15:33.559 "traddr": "10.0.0.2", 00:15:33.559 "trsvcid": "4420", 00:15:33.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.559 "prchk_reftag": false, 00:15:33.559 "prchk_guard": false, 00:15:33.559 "ctrlr_loss_timeout_sec": 0, 00:15:33.559 "reconnect_delay_sec": 0, 00:15:33.559 "fast_io_fail_timeout_sec": 0, 00:15:33.559 "psk": "/tmp/tmp.R3XR65JnTD", 00:15:33.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:33.559 "hdgst": false, 00:15:33.559 "ddgst": false 00:15:33.559 } 00:15:33.559 }, 00:15:33.559 { 00:15:33.559 "method": "bdev_nvme_set_hotplug", 00:15:33.559 "params": { 00:15:33.559 "period_us": 100000, 00:15:33.559 "enable": false 00:15:33.559 } 00:15:33.559 }, 00:15:33.559 { 00:15:33.559 "method": "bdev_wait_for_examine" 00:15:33.559 } 00:15:33.559 ] 00:15:33.559 }, 00:15:33.559 { 00:15:33.559 "subsystem": "nbd", 00:15:33.559 "config": [] 00:15:33.559 } 00:15:33.559 ] 00:15:33.559 }' 00:15:33.559 19:46:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:33.559 19:46:14 -- common/autotest_common.sh@10 -- # set +x 00:15:33.559 [2024-04-24 19:46:14.995856] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:15:33.559 [2024-04-24 19:46:14.995959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1704970 ] 00:15:33.559 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.559 [2024-04-24 19:46:15.053719] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.818 [2024-04-24 19:46:15.161049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.818 [2024-04-24 19:46:15.323236] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:33.818 [2024-04-24 19:46:15.323365] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:34.756 19:46:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:34.756 19:46:15 -- common/autotest_common.sh@850 -- # return 0 00:15:34.756 19:46:15 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:34.756 Running I/O for 10 seconds... 
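[annotation] With bdevperf idling in '-z' mode, the 10-second verify workload above is kicked off over its RPC socket, exactly as traced. Sketch (taking '-t 20' to be the script's wait timeout in seconds, which is an assumption about its semantics):

    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests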
00:15:44.740 00:15:44.740 Latency(us) 00:15:44.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.740 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:44.740 Verification LBA range: start 0x0 length 0x2000 00:15:44.740 TLSTESTn1 : 10.07 1584.32 6.19 0.00 0.00 80553.76 8689.59 112624.83 00:15:44.740 =================================================================================================================== 00:15:44.740 Total : 1584.32 6.19 0.00 0.00 80553.76 8689.59 112624.83 00:15:44.740 0 00:15:44.740 19:46:26 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:44.740 19:46:26 -- target/tls.sh@214 -- # killprocess 1704970 00:15:44.740 19:46:26 -- common/autotest_common.sh@936 -- # '[' -z 1704970 ']' 00:15:44.740 19:46:26 -- common/autotest_common.sh@940 -- # kill -0 1704970 00:15:44.740 19:46:26 -- common/autotest_common.sh@941 -- # uname 00:15:44.740 19:46:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:44.740 19:46:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1704970 00:15:44.740 19:46:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:44.740 19:46:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:44.740 19:46:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1704970' 00:15:44.740 killing process with pid 1704970 00:15:44.741 19:46:26 -- common/autotest_common.sh@955 -- # kill 1704970 00:15:44.741 Received shutdown signal, test time was about 10.000000 seconds 00:15:44.741 00:15:44.741 Latency(us) 00:15:44.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.741 =================================================================================================================== 00:15:44.741 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:44.741 [2024-04-24 19:46:26.223767] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:44.741 19:46:26 -- common/autotest_common.sh@960 -- # wait 1704970 00:15:44.999 19:46:26 -- target/tls.sh@215 -- # killprocess 1704817 00:15:44.999 19:46:26 -- common/autotest_common.sh@936 -- # '[' -z 1704817 ']' 00:15:44.999 19:46:26 -- common/autotest_common.sh@940 -- # kill -0 1704817 00:15:44.999 19:46:26 -- common/autotest_common.sh@941 -- # uname 00:15:44.999 19:46:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:44.999 19:46:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1704817 00:15:45.257 19:46:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:45.257 19:46:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:45.257 19:46:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1704817' 00:15:45.257 killing process with pid 1704817 00:15:45.257 19:46:26 -- common/autotest_common.sh@955 -- # kill 1704817 00:15:45.257 [2024-04-24 19:46:26.519785] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:45.257 19:46:26 -- common/autotest_common.sh@960 -- # wait 1704817 00:15:45.515 19:46:26 -- target/tls.sh@218 -- # nvmfappstart 00:15:45.515 19:46:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:45.515 19:46:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:45.515 19:46:26 -- common/autotest_common.sh@10 -- # set +x 00:15:45.515 19:46:26 
-- nvmf/common.sh@470 -- # nvmfpid=1706353 00:15:45.515 19:46:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:45.515 19:46:26 -- nvmf/common.sh@471 -- # waitforlisten 1706353 00:15:45.515 19:46:26 -- common/autotest_common.sh@817 -- # '[' -z 1706353 ']' 00:15:45.515 19:46:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.515 19:46:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:45.515 19:46:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.515 19:46:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:45.515 19:46:26 -- common/autotest_common.sh@10 -- # set +x 00:15:45.515 [2024-04-24 19:46:26.867439] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:15:45.515 [2024-04-24 19:46:26.867522] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.515 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.515 [2024-04-24 19:46:26.937004] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.773 [2024-04-24 19:46:27.051122] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.773 [2024-04-24 19:46:27.051191] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.773 [2024-04-24 19:46:27.051207] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:45.773 [2024-04-24 19:46:27.051221] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:45.773 [2024-04-24 19:46:27.051232] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
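[annotation] The killprocess traces repeated through this section reduce to a small liveness-check-then-reap pattern. A sketch under the assumption that the pid is a child of the calling shell (otherwise 'wait' would not reap it):

    pid=1704817
    kill -0 "$pid"                     # still alive?
    ps --no-headers -o comm= "$pid"    # e.g. reactor_1, an SPDK reactor, not sudo
    kill "$pid" && wait "$pid"         # terminate and reap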
00:15:45.773 [2024-04-24 19:46:27.051268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.339 19:46:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:46.339 19:46:27 -- common/autotest_common.sh@850 -- # return 0 00:15:46.339 19:46:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:46.339 19:46:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:46.339 19:46:27 -- common/autotest_common.sh@10 -- # set +x 00:15:46.339 19:46:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.339 19:46:27 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.R3XR65JnTD 00:15:46.339 19:46:27 -- target/tls.sh@49 -- # local key=/tmp/tmp.R3XR65JnTD 00:15:46.339 19:46:27 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:46.597 [2024-04-24 19:46:28.034999] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.597 19:46:28 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:46.854 19:46:28 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:47.114 [2024-04-24 19:46:28.540324] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:47.114 [2024-04-24 19:46:28.540571] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.114 19:46:28 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:47.372 malloc0 00:15:47.372 19:46:28 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:47.631 19:46:29 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R3XR65JnTD 00:15:47.890 [2024-04-24 19:46:29.378552] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:47.890 19:46:29 -- target/tls.sh@222 -- # bdevperf_pid=1706712 00:15:47.890 19:46:29 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:47.890 19:46:29 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:47.890 19:46:29 -- target/tls.sh@225 -- # waitforlisten 1706712 /var/tmp/bdevperf.sock 00:15:47.890 19:46:29 -- common/autotest_common.sh@817 -- # '[' -z 1706712 ']' 00:15:47.890 19:46:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:47.890 19:46:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:47.890 19:46:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:47.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
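[annotation] setup_nvmf_tgt above builds the TLS-enabled target in six RPCs. Condensed sketch, with addresses, NQNs, and the PSK path as in this run:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k       # -k: secure (TLS) listener
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.R3XR65JnTD   # PSK-path form, deprecated per the warning above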
00:15:47.890 19:46:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:47.890 19:46:29 -- common/autotest_common.sh@10 -- # set +x 00:15:48.149 [2024-04-24 19:46:29.441076] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:15:48.149 [2024-04-24 19:46:29.441143] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1706712 ] 00:15:48.149 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.149 [2024-04-24 19:46:29.502090] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.149 [2024-04-24 19:46:29.618118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.407 19:46:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:48.407 19:46:29 -- common/autotest_common.sh@850 -- # return 0 00:15:48.407 19:46:29 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.R3XR65JnTD 00:15:48.664 19:46:30 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:48.923 [2024-04-24 19:46:30.286447] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:48.923 nvme0n1 00:15:48.923 19:46:30 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:49.182 Running I/O for 1 seconds... 
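[annotation] The initiator half of this block (tls.sh@227-228) switches from the deprecated PSK path to the keyring: the key file is registered under a name, and the attach references that name via '--psk key0'. Sketch, with the RPC socket as in this run:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.R3XR65JnTD
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1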
00:15:50.126 00:15:50.126 Latency(us) 00:15:50.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.126 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:50.126 Verification LBA range: start 0x0 length 0x2000 00:15:50.126 nvme0n1 : 1.06 1639.54 6.40 0.00 0.00 76031.39 6553.60 118838.61 00:15:50.126 =================================================================================================================== 00:15:50.126 Total : 1639.54 6.40 0.00 0.00 76031.39 6553.60 118838.61 00:15:50.126 0 00:15:50.126 19:46:31 -- target/tls.sh@234 -- # killprocess 1706712 00:15:50.126 19:46:31 -- common/autotest_common.sh@936 -- # '[' -z 1706712 ']' 00:15:50.126 19:46:31 -- common/autotest_common.sh@940 -- # kill -0 1706712 00:15:50.126 19:46:31 -- common/autotest_common.sh@941 -- # uname 00:15:50.126 19:46:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:50.126 19:46:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1706712 00:15:50.126 19:46:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:50.126 19:46:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:50.126 19:46:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1706712' 00:15:50.126 killing process with pid 1706712 00:15:50.126 19:46:31 -- common/autotest_common.sh@955 -- # kill 1706712 00:15:50.126 Received shutdown signal, test time was about 1.000000 seconds 00:15:50.126 00:15:50.126 Latency(us) 00:15:50.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.126 =================================================================================================================== 00:15:50.126 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:50.126 19:46:31 -- common/autotest_common.sh@960 -- # wait 1706712 00:15:50.384 19:46:31 -- target/tls.sh@235 -- # killprocess 1706353 00:15:50.384 19:46:31 -- common/autotest_common.sh@936 -- # '[' -z 1706353 ']' 00:15:50.384 19:46:31 -- common/autotest_common.sh@940 -- # kill -0 1706353 00:15:50.384 19:46:31 -- common/autotest_common.sh@941 -- # uname 00:15:50.384 19:46:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:50.384 19:46:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1706353 00:15:50.384 19:46:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:50.384 19:46:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:50.384 19:46:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1706353' 00:15:50.384 killing process with pid 1706353 00:15:50.384 19:46:31 -- common/autotest_common.sh@955 -- # kill 1706353 00:15:50.384 [2024-04-24 19:46:31.876478] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:50.384 19:46:31 -- common/autotest_common.sh@960 -- # wait 1706353 00:15:50.642 19:46:32 -- target/tls.sh@238 -- # nvmfappstart 00:15:50.642 19:46:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:50.642 19:46:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:50.642 19:46:32 -- common/autotest_common.sh@10 -- # set +x 00:15:50.642 19:46:32 -- nvmf/common.sh@470 -- # nvmfpid=1707002 00:15:50.642 19:46:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:50.642 19:46:32 -- nvmf/common.sh@471 -- # waitforlisten 1707002 
00:15:50.642 19:46:32 -- common/autotest_common.sh@817 -- # '[' -z 1707002 ']' 00:15:50.642 19:46:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.642 19:46:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:50.642 19:46:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.642 19:46:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:50.642 19:46:32 -- common/autotest_common.sh@10 -- # set +x 00:15:50.902 [2024-04-24 19:46:32.195992] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:15:50.902 [2024-04-24 19:46:32.196082] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.902 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.902 [2024-04-24 19:46:32.263353] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.902 [2024-04-24 19:46:32.382941] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.902 [2024-04-24 19:46:32.383015] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.902 [2024-04-24 19:46:32.383032] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.902 [2024-04-24 19:46:32.383045] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.902 [2024-04-24 19:46:32.383057] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
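[annotation] As the app_setup_trace notices above say, the 0xFFFF tracepoint mask enabled with '-e 0xFFFF' can be inspected while the target runs or after the fact. Sketch taken straight from the notice text (the /tmp destination is illustrative):

    spdk_trace -s nvmf -i 0            # live snapshot from the running app
    cp /dev/shm/nvmf_trace.0 /tmp/     # keep the shm file for offline analysis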
00:15:50.902 [2024-04-24 19:46:32.383095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.161 19:46:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:51.161 19:46:32 -- common/autotest_common.sh@850 -- # return 0 00:15:51.161 19:46:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:51.161 19:46:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:51.161 19:46:32 -- common/autotest_common.sh@10 -- # set +x 00:15:51.161 19:46:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.161 19:46:32 -- target/tls.sh@239 -- # rpc_cmd 00:15:51.161 19:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.161 19:46:32 -- common/autotest_common.sh@10 -- # set +x 00:15:51.161 [2024-04-24 19:46:32.532493] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.161 malloc0 00:15:51.161 [2024-04-24 19:46:32.564693] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:51.161 [2024-04-24 19:46:32.564947] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.161 19:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.161 19:46:32 -- target/tls.sh@252 -- # bdevperf_pid=1707138 00:15:51.161 19:46:32 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:51.161 19:46:32 -- target/tls.sh@254 -- # waitforlisten 1707138 /var/tmp/bdevperf.sock 00:15:51.161 19:46:32 -- common/autotest_common.sh@817 -- # '[' -z 1707138 ']' 00:15:51.161 19:46:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:51.161 19:46:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:51.161 19:46:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:51.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:51.161 19:46:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:51.161 19:46:32 -- common/autotest_common.sh@10 -- # set +x 00:15:51.161 [2024-04-24 19:46:32.634062] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:15:51.161 [2024-04-24 19:46:32.634121] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707138 ] 00:15:51.161 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.419 [2024-04-24 19:46:32.694500] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.419 [2024-04-24 19:46:32.811554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.419 19:46:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:51.419 19:46:32 -- common/autotest_common.sh@850 -- # return 0 00:15:51.419 19:46:32 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.R3XR65JnTD 00:15:51.677 19:46:33 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:51.934 [2024-04-24 19:46:33.382262] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:52.192 nvme0n1 00:15:52.192 19:46:33 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:52.192 Running I/O for 1 seconds... 00:15:53.126 00:15:53.126 Latency(us) 00:15:53.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.126 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.126 Verification LBA range: start 0x0 length 0x2000 00:15:53.126 nvme0n1 : 1.06 1552.31 6.06 0.00 0.00 80334.21 6796.33 128159.29 00:15:53.126 =================================================================================================================== 00:15:53.126 Total : 1552.31 6.06 0.00 0.00 80334.21 6796.33 128159.29 00:15:53.126 0 00:15:53.385 19:46:34 -- target/tls.sh@263 -- # rpc_cmd save_config 00:15:53.385 19:46:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.385 19:46:34 -- common/autotest_common.sh@10 -- # set +x 00:15:53.385 19:46:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.385 19:46:34 -- target/tls.sh@263 -- # tgtcfg='{ 00:15:53.385 "subsystems": [ 00:15:53.385 { 00:15:53.385 "subsystem": "keyring", 00:15:53.385 "config": [ 00:15:53.385 { 00:15:53.385 "method": "keyring_file_add_key", 00:15:53.385 "params": { 00:15:53.385 "name": "key0", 00:15:53.385 "path": "/tmp/tmp.R3XR65JnTD" 00:15:53.385 } 00:15:53.385 } 00:15:53.385 ] 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "subsystem": "iobuf", 00:15:53.385 "config": [ 00:15:53.385 { 00:15:53.385 "method": "iobuf_set_options", 00:15:53.385 "params": { 00:15:53.385 "small_pool_count": 8192, 00:15:53.385 "large_pool_count": 1024, 00:15:53.385 "small_bufsize": 8192, 00:15:53.385 "large_bufsize": 135168 00:15:53.385 } 00:15:53.385 } 00:15:53.385 ] 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "subsystem": "sock", 00:15:53.385 "config": [ 00:15:53.385 { 00:15:53.385 "method": "sock_impl_set_options", 00:15:53.385 "params": { 00:15:53.385 "impl_name": "posix", 00:15:53.385 "recv_buf_size": 2097152, 00:15:53.385 "send_buf_size": 2097152, 00:15:53.385 "enable_recv_pipe": true, 00:15:53.385 "enable_quickack": false, 00:15:53.385 "enable_placement_id": 0, 00:15:53.385 
"enable_zerocopy_send_server": true, 00:15:53.385 "enable_zerocopy_send_client": false, 00:15:53.385 "zerocopy_threshold": 0, 00:15:53.385 "tls_version": 0, 00:15:53.385 "enable_ktls": false 00:15:53.385 } 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "method": "sock_impl_set_options", 00:15:53.385 "params": { 00:15:53.385 "impl_name": "ssl", 00:15:53.385 "recv_buf_size": 4096, 00:15:53.385 "send_buf_size": 4096, 00:15:53.385 "enable_recv_pipe": true, 00:15:53.385 "enable_quickack": false, 00:15:53.385 "enable_placement_id": 0, 00:15:53.385 "enable_zerocopy_send_server": true, 00:15:53.385 "enable_zerocopy_send_client": false, 00:15:53.385 "zerocopy_threshold": 0, 00:15:53.385 "tls_version": 0, 00:15:53.385 "enable_ktls": false 00:15:53.385 } 00:15:53.385 } 00:15:53.385 ] 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "subsystem": "vmd", 00:15:53.385 "config": [] 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "subsystem": "accel", 00:15:53.385 "config": [ 00:15:53.385 { 00:15:53.385 "method": "accel_set_options", 00:15:53.385 "params": { 00:15:53.385 "small_cache_size": 128, 00:15:53.385 "large_cache_size": 16, 00:15:53.385 "task_count": 2048, 00:15:53.385 "sequence_count": 2048, 00:15:53.385 "buf_count": 2048 00:15:53.385 } 00:15:53.385 } 00:15:53.385 ] 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "subsystem": "bdev", 00:15:53.385 "config": [ 00:15:53.385 { 00:15:53.385 "method": "bdev_set_options", 00:15:53.385 "params": { 00:15:53.385 "bdev_io_pool_size": 65535, 00:15:53.385 "bdev_io_cache_size": 256, 00:15:53.385 "bdev_auto_examine": true, 00:15:53.385 "iobuf_small_cache_size": 128, 00:15:53.385 "iobuf_large_cache_size": 16 00:15:53.385 } 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "method": "bdev_raid_set_options", 00:15:53.385 "params": { 00:15:53.385 "process_window_size_kb": 1024 00:15:53.385 } 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "method": "bdev_iscsi_set_options", 00:15:53.385 "params": { 00:15:53.385 "timeout_sec": 30 00:15:53.385 } 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "method": "bdev_nvme_set_options", 00:15:53.385 "params": { 00:15:53.385 "action_on_timeout": "none", 00:15:53.385 "timeout_us": 0, 00:15:53.385 "timeout_admin_us": 0, 00:15:53.385 "keep_alive_timeout_ms": 10000, 00:15:53.385 "arbitration_burst": 0, 00:15:53.385 "low_priority_weight": 0, 00:15:53.385 "medium_priority_weight": 0, 00:15:53.385 "high_priority_weight": 0, 00:15:53.385 "nvme_adminq_poll_period_us": 10000, 00:15:53.385 "nvme_ioq_poll_period_us": 0, 00:15:53.385 "io_queue_requests": 0, 00:15:53.385 "delay_cmd_submit": true, 00:15:53.385 "transport_retry_count": 4, 00:15:53.385 "bdev_retry_count": 3, 00:15:53.385 "transport_ack_timeout": 0, 00:15:53.385 "ctrlr_loss_timeout_sec": 0, 00:15:53.385 "reconnect_delay_sec": 0, 00:15:53.385 "fast_io_fail_timeout_sec": 0, 00:15:53.385 "disable_auto_failback": false, 00:15:53.385 "generate_uuids": false, 00:15:53.385 "transport_tos": 0, 00:15:53.385 "nvme_error_stat": false, 00:15:53.385 "rdma_srq_size": 0, 00:15:53.385 "io_path_stat": false, 00:15:53.385 "allow_accel_sequence": false, 00:15:53.385 "rdma_max_cq_size": 0, 00:15:53.385 "rdma_cm_event_timeout_ms": 0, 00:15:53.385 "dhchap_digests": [ 00:15:53.385 "sha256", 00:15:53.385 "sha384", 00:15:53.385 "sha512" 00:15:53.385 ], 00:15:53.385 "dhchap_dhgroups": [ 00:15:53.385 "null", 00:15:53.385 "ffdhe2048", 00:15:53.385 "ffdhe3072", 00:15:53.385 "ffdhe4096", 00:15:53.385 "ffdhe6144", 00:15:53.385 "ffdhe8192" 00:15:53.385 ] 00:15:53.385 } 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "method": 
"bdev_nvme_set_hotplug", 00:15:53.385 "params": { 00:15:53.385 "period_us": 100000, 00:15:53.385 "enable": false 00:15:53.385 } 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "method": "bdev_malloc_create", 00:15:53.385 "params": { 00:15:53.385 "name": "malloc0", 00:15:53.385 "num_blocks": 8192, 00:15:53.385 "block_size": 4096, 00:15:53.385 "physical_block_size": 4096, 00:15:53.385 "uuid": "ecd7a20c-30ff-4c95-a7aa-e2684c9192a6", 00:15:53.385 "optimal_io_boundary": 0 00:15:53.385 } 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "method": "bdev_wait_for_examine" 00:15:53.385 } 00:15:53.385 ] 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "subsystem": "nbd", 00:15:53.385 "config": [] 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "subsystem": "scheduler", 00:15:53.385 "config": [ 00:15:53.385 { 00:15:53.385 "method": "framework_set_scheduler", 00:15:53.385 "params": { 00:15:53.385 "name": "static" 00:15:53.385 } 00:15:53.385 } 00:15:53.385 ] 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "subsystem": "nvmf", 00:15:53.385 "config": [ 00:15:53.385 { 00:15:53.385 "method": "nvmf_set_config", 00:15:53.385 "params": { 00:15:53.385 "discovery_filter": "match_any", 00:15:53.385 "admin_cmd_passthru": { 00:15:53.385 "identify_ctrlr": false 00:15:53.385 } 00:15:53.385 } 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "method": "nvmf_set_max_subsystems", 00:15:53.385 "params": { 00:15:53.385 "max_subsystems": 1024 00:15:53.385 } 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "method": "nvmf_set_crdt", 00:15:53.385 "params": { 00:15:53.385 "crdt1": 0, 00:15:53.385 "crdt2": 0, 00:15:53.385 "crdt3": 0 00:15:53.385 } 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "method": "nvmf_create_transport", 00:15:53.385 "params": { 00:15:53.385 "trtype": "TCP", 00:15:53.385 "max_queue_depth": 128, 00:15:53.385 "max_io_qpairs_per_ctrlr": 127, 00:15:53.385 "in_capsule_data_size": 4096, 00:15:53.385 "max_io_size": 131072, 00:15:53.385 "io_unit_size": 131072, 00:15:53.385 "max_aq_depth": 128, 00:15:53.385 "num_shared_buffers": 511, 00:15:53.385 "buf_cache_size": 4294967295, 00:15:53.385 "dif_insert_or_strip": false, 00:15:53.385 "zcopy": false, 00:15:53.385 "c2h_success": false, 00:15:53.385 "sock_priority": 0, 00:15:53.385 "abort_timeout_sec": 1, 00:15:53.385 "ack_timeout": 0, 00:15:53.385 "data_wr_pool_size": 0 00:15:53.385 } 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "method": "nvmf_create_subsystem", 00:15:53.385 "params": { 00:15:53.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.385 "allow_any_host": false, 00:15:53.385 "serial_number": "00000000000000000000", 00:15:53.385 "model_number": "SPDK bdev Controller", 00:15:53.385 "max_namespaces": 32, 00:15:53.385 "min_cntlid": 1, 00:15:53.385 "max_cntlid": 65519, 00:15:53.385 "ana_reporting": false 00:15:53.385 } 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "method": "nvmf_subsystem_add_host", 00:15:53.385 "params": { 00:15:53.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.385 "host": "nqn.2016-06.io.spdk:host1", 00:15:53.385 "psk": "key0" 00:15:53.385 } 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "method": "nvmf_subsystem_add_ns", 00:15:53.385 "params": { 00:15:53.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.385 "namespace": { 00:15:53.385 "nsid": 1, 00:15:53.385 "bdev_name": "malloc0", 00:15:53.385 "nguid": "ECD7A20C30FF4C95A7AAE2684C9192A6", 00:15:53.385 "uuid": "ecd7a20c-30ff-4c95-a7aa-e2684c9192a6", 00:15:53.385 "no_auto_visible": false 00:15:53.385 } 00:15:53.385 } 00:15:53.385 }, 00:15:53.385 { 00:15:53.385 "method": "nvmf_subsystem_add_listener", 00:15:53.385 "params": { 
00:15:53.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.386 "listen_address": { 00:15:53.386 "trtype": "TCP", 00:15:53.386 "adrfam": "IPv4", 00:15:53.386 "traddr": "10.0.0.2", 00:15:53.386 "trsvcid": "4420" 00:15:53.386 }, 00:15:53.386 "secure_channel": true 00:15:53.386 } 00:15:53.386 } 00:15:53.386 ] 00:15:53.386 } 00:15:53.386 ] 00:15:53.386 }' 00:15:53.386 19:46:34 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:53.645 19:46:35 -- target/tls.sh@264 -- # bperfcfg='{ 00:15:53.645 "subsystems": [ 00:15:53.645 { 00:15:53.645 "subsystem": "keyring", 00:15:53.645 "config": [ 00:15:53.645 { 00:15:53.645 "method": "keyring_file_add_key", 00:15:53.645 "params": { 00:15:53.645 "name": "key0", 00:15:53.645 "path": "/tmp/tmp.R3XR65JnTD" 00:15:53.645 } 00:15:53.645 } 00:15:53.645 ] 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "subsystem": "iobuf", 00:15:53.645 "config": [ 00:15:53.645 { 00:15:53.645 "method": "iobuf_set_options", 00:15:53.645 "params": { 00:15:53.645 "small_pool_count": 8192, 00:15:53.645 "large_pool_count": 1024, 00:15:53.645 "small_bufsize": 8192, 00:15:53.645 "large_bufsize": 135168 00:15:53.645 } 00:15:53.645 } 00:15:53.645 ] 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "subsystem": "sock", 00:15:53.645 "config": [ 00:15:53.645 { 00:15:53.645 "method": "sock_impl_set_options", 00:15:53.645 "params": { 00:15:53.645 "impl_name": "posix", 00:15:53.645 "recv_buf_size": 2097152, 00:15:53.645 "send_buf_size": 2097152, 00:15:53.645 "enable_recv_pipe": true, 00:15:53.645 "enable_quickack": false, 00:15:53.645 "enable_placement_id": 0, 00:15:53.645 "enable_zerocopy_send_server": true, 00:15:53.645 "enable_zerocopy_send_client": false, 00:15:53.645 "zerocopy_threshold": 0, 00:15:53.645 "tls_version": 0, 00:15:53.645 "enable_ktls": false 00:15:53.645 } 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "method": "sock_impl_set_options", 00:15:53.645 "params": { 00:15:53.645 "impl_name": "ssl", 00:15:53.645 "recv_buf_size": 4096, 00:15:53.645 "send_buf_size": 4096, 00:15:53.645 "enable_recv_pipe": true, 00:15:53.645 "enable_quickack": false, 00:15:53.645 "enable_placement_id": 0, 00:15:53.645 "enable_zerocopy_send_server": true, 00:15:53.645 "enable_zerocopy_send_client": false, 00:15:53.645 "zerocopy_threshold": 0, 00:15:53.645 "tls_version": 0, 00:15:53.645 "enable_ktls": false 00:15:53.645 } 00:15:53.645 } 00:15:53.645 ] 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "subsystem": "vmd", 00:15:53.645 "config": [] 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "subsystem": "accel", 00:15:53.645 "config": [ 00:15:53.645 { 00:15:53.645 "method": "accel_set_options", 00:15:53.645 "params": { 00:15:53.645 "small_cache_size": 128, 00:15:53.645 "large_cache_size": 16, 00:15:53.645 "task_count": 2048, 00:15:53.645 "sequence_count": 2048, 00:15:53.645 "buf_count": 2048 00:15:53.645 } 00:15:53.645 } 00:15:53.645 ] 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "subsystem": "bdev", 00:15:53.645 "config": [ 00:15:53.645 { 00:15:53.645 "method": "bdev_set_options", 00:15:53.645 "params": { 00:15:53.645 "bdev_io_pool_size": 65535, 00:15:53.645 "bdev_io_cache_size": 256, 00:15:53.645 "bdev_auto_examine": true, 00:15:53.645 "iobuf_small_cache_size": 128, 00:15:53.645 "iobuf_large_cache_size": 16 00:15:53.645 } 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "method": "bdev_raid_set_options", 00:15:53.645 "params": { 00:15:53.645 "process_window_size_kb": 1024 00:15:53.645 } 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "method": 
"bdev_iscsi_set_options", 00:15:53.645 "params": { 00:15:53.645 "timeout_sec": 30 00:15:53.645 } 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "method": "bdev_nvme_set_options", 00:15:53.645 "params": { 00:15:53.645 "action_on_timeout": "none", 00:15:53.645 "timeout_us": 0, 00:15:53.645 "timeout_admin_us": 0, 00:15:53.645 "keep_alive_timeout_ms": 10000, 00:15:53.645 "arbitration_burst": 0, 00:15:53.645 "low_priority_weight": 0, 00:15:53.645 "medium_priority_weight": 0, 00:15:53.645 "high_priority_weight": 0, 00:15:53.645 "nvme_adminq_poll_period_us": 10000, 00:15:53.645 "nvme_ioq_poll_period_us": 0, 00:15:53.645 "io_queue_requests": 512, 00:15:53.645 "delay_cmd_submit": true, 00:15:53.645 "transport_retry_count": 4, 00:15:53.645 "bdev_retry_count": 3, 00:15:53.645 "transport_ack_timeout": 0, 00:15:53.645 "ctrlr_loss_timeout_sec": 0, 00:15:53.645 "reconnect_delay_sec": 0, 00:15:53.645 "fast_io_fail_timeout_sec": 0, 00:15:53.645 "disable_auto_failback": false, 00:15:53.645 "generate_uuids": false, 00:15:53.645 "transport_tos": 0, 00:15:53.645 "nvme_error_stat": false, 00:15:53.645 "rdma_srq_size": 0, 00:15:53.645 "io_path_stat": false, 00:15:53.645 "allow_accel_sequence": false, 00:15:53.645 "rdma_max_cq_size": 0, 00:15:53.645 "rdma_cm_event_timeout_ms": 0, 00:15:53.645 "dhchap_digests": [ 00:15:53.645 "sha256", 00:15:53.645 "sha384", 00:15:53.645 "sha512" 00:15:53.645 ], 00:15:53.645 "dhchap_dhgroups": [ 00:15:53.645 "null", 00:15:53.645 "ffdhe2048", 00:15:53.645 "ffdhe3072", 00:15:53.645 "ffdhe4096", 00:15:53.645 "ffdhe6144", 00:15:53.645 "ffdhe8192" 00:15:53.645 ] 00:15:53.645 } 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "method": "bdev_nvme_attach_controller", 00:15:53.645 "params": { 00:15:53.645 "name": "nvme0", 00:15:53.645 "trtype": "TCP", 00:15:53.645 "adrfam": "IPv4", 00:15:53.645 "traddr": "10.0.0.2", 00:15:53.645 "trsvcid": "4420", 00:15:53.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.645 "prchk_reftag": false, 00:15:53.645 "prchk_guard": false, 00:15:53.645 "ctrlr_loss_timeout_sec": 0, 00:15:53.645 "reconnect_delay_sec": 0, 00:15:53.645 "fast_io_fail_timeout_sec": 0, 00:15:53.645 "psk": "key0", 00:15:53.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:53.645 "hdgst": false, 00:15:53.645 "ddgst": false 00:15:53.645 } 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "method": "bdev_nvme_set_hotplug", 00:15:53.645 "params": { 00:15:53.645 "period_us": 100000, 00:15:53.645 "enable": false 00:15:53.645 } 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "method": "bdev_enable_histogram", 00:15:53.645 "params": { 00:15:53.645 "name": "nvme0n1", 00:15:53.645 "enable": true 00:15:53.645 } 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "method": "bdev_wait_for_examine" 00:15:53.645 } 00:15:53.645 ] 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "subsystem": "nbd", 00:15:53.646 "config": [] 00:15:53.646 } 00:15:53.646 ] 00:15:53.646 }' 00:15:53.646 19:46:35 -- target/tls.sh@266 -- # killprocess 1707138 00:15:53.646 19:46:35 -- common/autotest_common.sh@936 -- # '[' -z 1707138 ']' 00:15:53.646 19:46:35 -- common/autotest_common.sh@940 -- # kill -0 1707138 00:15:53.646 19:46:35 -- common/autotest_common.sh@941 -- # uname 00:15:53.646 19:46:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:53.646 19:46:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1707138 00:15:53.646 19:46:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:53.646 19:46:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:53.646 19:46:35 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1707138' 00:15:53.646 killing process with pid 1707138 00:15:53.646 19:46:35 -- common/autotest_common.sh@955 -- # kill 1707138 00:15:53.646 Received shutdown signal, test time was about 1.000000 seconds 00:15:53.646 00:15:53.646 Latency(us) 00:15:53.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.646 =================================================================================================================== 00:15:53.646 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:53.646 19:46:35 -- common/autotest_common.sh@960 -- # wait 1707138 00:15:53.906 19:46:35 -- target/tls.sh@267 -- # killprocess 1707002 00:15:53.906 19:46:35 -- common/autotest_common.sh@936 -- # '[' -z 1707002 ']' 00:15:53.906 19:46:35 -- common/autotest_common.sh@940 -- # kill -0 1707002 00:15:53.906 19:46:35 -- common/autotest_common.sh@941 -- # uname 00:15:53.906 19:46:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:53.906 19:46:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1707002 00:15:53.906 19:46:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:53.906 19:46:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:53.906 19:46:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1707002' 00:15:53.906 killing process with pid 1707002 00:15:53.906 19:46:35 -- common/autotest_common.sh@955 -- # kill 1707002 00:15:53.906 19:46:35 -- common/autotest_common.sh@960 -- # wait 1707002 00:15:54.474 19:46:35 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:15:54.474 19:46:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:54.474 19:46:35 -- target/tls.sh@269 -- # echo '{ 00:15:54.474 "subsystems": [ 00:15:54.474 { 00:15:54.474 "subsystem": "keyring", 00:15:54.474 "config": [ 00:15:54.474 { 00:15:54.474 "method": "keyring_file_add_key", 00:15:54.474 "params": { 00:15:54.474 "name": "key0", 00:15:54.474 "path": "/tmp/tmp.R3XR65JnTD" 00:15:54.474 } 00:15:54.474 } 00:15:54.474 ] 00:15:54.474 }, 00:15:54.474 { 00:15:54.474 "subsystem": "iobuf", 00:15:54.474 "config": [ 00:15:54.474 { 00:15:54.474 "method": "iobuf_set_options", 00:15:54.474 "params": { 00:15:54.474 "small_pool_count": 8192, 00:15:54.474 "large_pool_count": 1024, 00:15:54.474 "small_bufsize": 8192, 00:15:54.474 "large_bufsize": 135168 00:15:54.474 } 00:15:54.474 } 00:15:54.474 ] 00:15:54.474 }, 00:15:54.474 { 00:15:54.474 "subsystem": "sock", 00:15:54.474 "config": [ 00:15:54.474 { 00:15:54.474 "method": "sock_impl_set_options", 00:15:54.474 "params": { 00:15:54.474 "impl_name": "posix", 00:15:54.474 "recv_buf_size": 2097152, 00:15:54.474 "send_buf_size": 2097152, 00:15:54.474 "enable_recv_pipe": true, 00:15:54.474 "enable_quickack": false, 00:15:54.474 "enable_placement_id": 0, 00:15:54.474 "enable_zerocopy_send_server": true, 00:15:54.474 "enable_zerocopy_send_client": false, 00:15:54.474 "zerocopy_threshold": 0, 00:15:54.474 "tls_version": 0, 00:15:54.474 "enable_ktls": false 00:15:54.474 } 00:15:54.474 }, 00:15:54.474 { 00:15:54.474 "method": "sock_impl_set_options", 00:15:54.474 "params": { 00:15:54.474 "impl_name": "ssl", 00:15:54.474 "recv_buf_size": 4096, 00:15:54.474 "send_buf_size": 4096, 00:15:54.474 "enable_recv_pipe": true, 00:15:54.474 "enable_quickack": false, 00:15:54.474 "enable_placement_id": 0, 00:15:54.474 "enable_zerocopy_send_server": true, 00:15:54.474 "enable_zerocopy_send_client": false, 00:15:54.474 "zerocopy_threshold": 0, 
00:15:54.474 "tls_version": 0, 00:15:54.474 "enable_ktls": false 00:15:54.474 } 00:15:54.474 } 00:15:54.474 ] 00:15:54.474 }, 00:15:54.474 { 00:15:54.474 "subsystem": "vmd", 00:15:54.474 "config": [] 00:15:54.474 }, 00:15:54.474 { 00:15:54.474 "subsystem": "accel", 00:15:54.474 "config": [ 00:15:54.474 { 00:15:54.474 "method": "accel_set_options", 00:15:54.474 "params": { 00:15:54.474 "small_cache_size": 128, 00:15:54.474 "large_cache_size": 16, 00:15:54.474 "task_count": 2048, 00:15:54.474 "sequence_count": 2048, 00:15:54.474 "buf_count": 2048 00:15:54.474 } 00:15:54.474 } 00:15:54.474 ] 00:15:54.474 }, 00:15:54.474 { 00:15:54.474 "subsystem": "bdev", 00:15:54.474 "config": [ 00:15:54.474 { 00:15:54.474 "method": "bdev_set_options", 00:15:54.474 "params": { 00:15:54.474 "bdev_io_pool_size": 65535, 00:15:54.474 "bdev_io_cache_size": 256, 00:15:54.474 "bdev_auto_examine": true, 00:15:54.474 "iobuf_small_cache_size": 128, 00:15:54.474 "iobuf_large_cache_size": 16 00:15:54.474 } 00:15:54.474 }, 00:15:54.474 { 00:15:54.474 "method": "bdev_raid_set_options", 00:15:54.474 "params": { 00:15:54.474 "process_window_size_kb": 1024 00:15:54.474 } 00:15:54.474 }, 00:15:54.474 { 00:15:54.474 "method": "bdev_iscsi_set_options", 00:15:54.474 "params": { 00:15:54.474 "timeout_sec": 30 00:15:54.474 } 00:15:54.474 }, 00:15:54.474 { 00:15:54.474 "method": "bdev_nvme_set_options", 00:15:54.474 "params": { 00:15:54.474 "action_on_timeout": "none", 00:15:54.474 "timeout_us": 0, 00:15:54.474 "timeout_admin_us": 0, 00:15:54.474 "keep_alive_timeout_ms": 10000, 00:15:54.474 "arbitration_burst": 0, 00:15:54.474 "low_priority_weight": 0, 00:15:54.474 "medium_priority_weight": 0, 00:15:54.474 "high_priority_weight": 0, 00:15:54.474 "nvme_adminq_poll_period_us": 10000, 00:15:54.474 "nvme_ioq_poll_period_us": 0, 00:15:54.474 "io_queue_requests": 0, 00:15:54.474 "delay_cmd_submit": true, 00:15:54.474 "transport_retry_count": 4, 00:15:54.474 "bdev_retry_count": 3, 00:15:54.474 "transport_ack_timeout": 0, 00:15:54.474 "ctrlr_loss_timeout_sec": 0, 00:15:54.474 "reconnect_delay_sec": 0, 00:15:54.474 "fast_io_fail_timeout_sec": 0, 00:15:54.474 "disable_auto_failback": false, 00:15:54.474 "generate_uuids": false, 00:15:54.474 "transport_tos": 0, 00:15:54.474 "nvme_error_stat": false, 00:15:54.474 "rdma_srq_size": 0, 00:15:54.474 "io_path_stat": false, 00:15:54.474 "allow_accel_sequence": false, 00:15:54.474 "rdma_max_cq_size": 0, 00:15:54.474 "rdma_cm_event_timeout_ms": 0, 00:15:54.474 "dhchap_digests": [ 00:15:54.474 "sha256", 00:15:54.474 "sha384", 00:15:54.474 "sha512" 00:15:54.474 ], 00:15:54.474 "dhchap_dhgroups": [ 00:15:54.474 "null", 00:15:54.474 "ffdhe2048", 00:15:54.474 "ffdhe3072", 00:15:54.474 "ffdhe4096", 00:15:54.474 "ffdhe6144", 00:15:54.474 "ffdhe8192" 00:15:54.474 ] 00:15:54.474 } 00:15:54.474 }, 00:15:54.474 { 00:15:54.474 "method": "bdev_nvme_set_hotplug", 00:15:54.474 "params": { 00:15:54.474 "period_us": 100000, 00:15:54.474 "enable": false 00:15:54.474 } 00:15:54.474 }, 00:15:54.474 { 00:15:54.474 "method": "bdev_malloc_create", 00:15:54.474 "params": { 00:15:54.474 "name": "malloc0", 00:15:54.475 "num_blocks": 8192, 00:15:54.475 "block_size": 4096, 00:15:54.475 "physical_block_size": 4096, 00:15:54.475 "uuid": "ecd7a20c-30ff-4c95-a7aa-e2684c9192a6", 00:15:54.475 "optimal_io_boundary": 0 00:15:54.475 } 00:15:54.475 }, 00:15:54.475 { 00:15:54.475 "method": "bdev_wait_for_examine" 00:15:54.475 } 00:15:54.475 ] 00:15:54.475 }, 00:15:54.475 { 00:15:54.475 "subsystem": "nbd", 00:15:54.475 "config": [] 
00:15:54.475 }, 00:15:54.475 { 00:15:54.475 "subsystem": "scheduler", 00:15:54.475 "config": [ 00:15:54.475 { 00:15:54.475 "method": "framework_set_scheduler", 00:15:54.475 "params": { 00:15:54.475 "name": "static" 00:15:54.475 } 00:15:54.475 } 00:15:54.475 ] 00:15:54.475 }, 00:15:54.475 { 00:15:54.475 "subsystem": "nvmf", 00:15:54.475 "config": [ 00:15:54.475 { 00:15:54.475 "method": "nvmf_set_config", 00:15:54.475 "params": { 00:15:54.475 "discovery_filter": "match_any", 00:15:54.475 "admin_cmd_passthru": { 00:15:54.475 "identify_ctrlr": false 00:15:54.475 } 00:15:54.475 } 00:15:54.475 }, 00:15:54.475 { 00:15:54.475 "method": "nvmf_set_max_subsystems", 00:15:54.475 "params": { 00:15:54.475 "max_subsystems": 1024 00:15:54.475 } 00:15:54.475 }, 00:15:54.475 { 00:15:54.475 "method": "nvmf_set_crdt", 00:15:54.475 "params": { 00:15:54.475 "crdt1": 0, 00:15:54.475 "crdt2": 0, 00:15:54.475 "crdt3": 0 00:15:54.475 } 00:15:54.475 }, 00:15:54.475 { 00:15:54.475 "method": "nvmf_create_transport", 00:15:54.475 "params": { 00:15:54.475 "trtype": "TCP", 00:15:54.475 "max_queue_depth": 128, 00:15:54.475 "max_io_qpairs_per_ctrlr": 127, 00:15:54.475 "in_capsule_data_size": 4096, 00:15:54.475 "max_io_size": 131072, 00:15:54.475 "io_unit_size": 131072, 00:15:54.475 "max_aq_depth": 128, 00:15:54.475 "num_shared_buffers": 511, 00:15:54.475 "buf_cache_size": 4294967295, 00:15:54.475 "dif_insert_or_strip": false, 00:15:54.475 "zcopy": false, 00:15:54.475 "c2h_success": false, 00:15:54.475 "sock_priority": 0, 00:15:54.475 "abort_timeout_sec": 1, 00:15:54.475 "ack_timeout": 0, 00:15:54.475 "data_wr_pool_size": 0 00:15:54.475 } 00:15:54.475 }, 00:15:54.475 { 00:15:54.475 "method": "nvmf_create_subsystem", 00:15:54.475 "params": { 00:15:54.475 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.475 "allow_any_host": false, 00:15:54.475 "serial_number": "00000000000000000000", 00:15:54.475 "model_number": "SPDK bdev Controller", 00:15:54.475 "max_namespaces": 32, 00:15:54.475 "min_cntlid": 1, 00:15:54.475 "max_cntlid": 65519, 00:15:54.475 "ana_reporting": false 00:15:54.475 } 00:15:54.475 }, 00:15:54.475 { 00:15:54.475 "method": "nvmf_subsystem_add_host", 00:15:54.475 "params": { 00:15:54.475 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.475 "host": "nqn.2016-06.io.spdk:host1", 00:15:54.475 "psk": "key0" 00:15:54.475 } 00:15:54.475 }, 00:15:54.475 { 00:15:54.475 "method": "nvmf_subsystem_add_ns", 00:15:54.475 "params": { 00:15:54.475 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.475 "namespace": { 00:15:54.475 "nsid": 1, 00:15:54.475 "bdev_name": "malloc0", 00:15:54.475 "nguid": "ECD7A20C30FF4C95A7AAE2684C9192A6", 00:15:54.475 "uuid": "ecd7a20c-30ff-4c95-a7aa-e2684c9192a6", 00:15:54.475 "no_auto_visible": false 00:15:54.475 } 00:15:54.475 } 00:15:54.475 }, 00:15:54.475 { 00:15:54.475 "method": "nvmf_subsystem_add_listener", 00:15:54.475 "params": { 00:15:54.475 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.475 "listen_address": { 00:15:54.475 "trtype": "TCP", 00:15:54.475 "adrfam": "IPv4", 00:15:54.475 "traddr": "10.0.0.2", 00:15:54.475 "trsvcid": "4420" 00:15:54.475 }, 00:15:54.475 "secure_channel": true 00:15:54.475 } 00:15:54.475 } 00:15:54.475 ] 00:15:54.475 } 00:15:54.475 ] 00:15:54.475 }' 00:15:54.475 19:46:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:54.475 19:46:35 -- common/autotest_common.sh@10 -- # set +x 00:15:54.475 19:46:35 -- nvmf/common.sh@470 -- # nvmfpid=1707433 00:15:54.475 19:46:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:54.475 19:46:35 -- nvmf/common.sh@471 -- # waitforlisten 1707433 00:15:54.475 19:46:35 -- common/autotest_common.sh@817 -- # '[' -z 1707433 ']' 00:15:54.475 19:46:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.475 19:46:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:54.475 19:46:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.475 19:46:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:54.475 19:46:35 -- common/autotest_common.sh@10 -- # set +x 00:15:54.475 [2024-04-24 19:46:35.729976] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:15:54.475 [2024-04-24 19:46:35.730087] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.475 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.475 [2024-04-24 19:46:35.798741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.475 [2024-04-24 19:46:35.914688] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.475 [2024-04-24 19:46:35.914740] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.475 [2024-04-24 19:46:35.914761] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.475 [2024-04-24 19:46:35.914773] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.475 [2024-04-24 19:46:35.914784] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
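Note: the NOTICE lines above describe the app's built-in tracing hooks; a minimal sketch of both capture options, with the tool name and shm path taken verbatim from the log output:

    # snapshot live tracepoint events from the running nvmf app (instance 0)
    spdk_trace -s nvmf -i 0
    # or keep the raw shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0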
00:15:54.475 [2024-04-24 19:46:35.914868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.735 [2024-04-24 19:46:36.155171] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.735 [2024-04-24 19:46:36.187182] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:54.735 [2024-04-24 19:46:36.208852] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.301 19:46:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:55.301 19:46:36 -- common/autotest_common.sh@850 -- # return 0 00:15:55.301 19:46:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:55.301 19:46:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:55.301 19:46:36 -- common/autotest_common.sh@10 -- # set +x 00:15:55.301 19:46:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.301 19:46:36 -- target/tls.sh@272 -- # bdevperf_pid=1707586 00:15:55.301 19:46:36 -- target/tls.sh@273 -- # waitforlisten 1707586 /var/tmp/bdevperf.sock 00:15:55.301 19:46:36 -- common/autotest_common.sh@817 -- # '[' -z 1707586 ']' 00:15:55.301 19:46:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:55.302 19:46:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:55.302 19:46:36 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:55.302 19:46:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:55.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
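The bdevperf invocation above receives its JSON config on /dev/fd/63; a minimal sketch of that pattern using process substitution (relative binary path assumed):

    # $bperfcfg holds the JSON dumped below; <(...) exposes it as /dev/fd/63
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")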
00:15:55.302 19:46:36 -- target/tls.sh@270 -- # echo '{ 00:15:55.302 "subsystems": [ 00:15:55.302 { 00:15:55.302 "subsystem": "keyring", 00:15:55.302 "config": [ 00:15:55.302 { 00:15:55.302 "method": "keyring_file_add_key", 00:15:55.302 "params": { 00:15:55.302 "name": "key0", 00:15:55.302 "path": "/tmp/tmp.R3XR65JnTD" 00:15:55.302 } 00:15:55.302 } 00:15:55.302 ] 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "subsystem": "iobuf", 00:15:55.302 "config": [ 00:15:55.302 { 00:15:55.302 "method": "iobuf_set_options", 00:15:55.302 "params": { 00:15:55.302 "small_pool_count": 8192, 00:15:55.302 "large_pool_count": 1024, 00:15:55.302 "small_bufsize": 8192, 00:15:55.302 "large_bufsize": 135168 00:15:55.302 } 00:15:55.302 } 00:15:55.302 ] 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "subsystem": "sock", 00:15:55.302 "config": [ 00:15:55.302 { 00:15:55.302 "method": "sock_impl_set_options", 00:15:55.302 "params": { 00:15:55.302 "impl_name": "posix", 00:15:55.302 "recv_buf_size": 2097152, 00:15:55.302 "send_buf_size": 2097152, 00:15:55.302 "enable_recv_pipe": true, 00:15:55.302 "enable_quickack": false, 00:15:55.302 "enable_placement_id": 0, 00:15:55.302 "enable_zerocopy_send_server": true, 00:15:55.302 "enable_zerocopy_send_client": false, 00:15:55.302 "zerocopy_threshold": 0, 00:15:55.302 "tls_version": 0, 00:15:55.302 "enable_ktls": false 00:15:55.302 } 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "method": "sock_impl_set_options", 00:15:55.302 "params": { 00:15:55.302 "impl_name": "ssl", 00:15:55.302 "recv_buf_size": 4096, 00:15:55.302 "send_buf_size": 4096, 00:15:55.302 "enable_recv_pipe": true, 00:15:55.302 "enable_quickack": false, 00:15:55.302 "enable_placement_id": 0, 00:15:55.302 "enable_zerocopy_send_server": true, 00:15:55.302 "enable_zerocopy_send_client": false, 00:15:55.302 "zerocopy_threshold": 0, 00:15:55.302 "tls_version": 0, 00:15:55.302 "enable_ktls": false 00:15:55.302 } 00:15:55.302 } 00:15:55.302 ] 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "subsystem": "vmd", 00:15:55.302 "config": [] 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "subsystem": "accel", 00:15:55.302 "config": [ 00:15:55.302 { 00:15:55.302 "method": "accel_set_options", 00:15:55.302 "params": { 00:15:55.302 "small_cache_size": 128, 00:15:55.302 "large_cache_size": 16, 00:15:55.302 "task_count": 2048, 00:15:55.302 "sequence_count": 2048, 00:15:55.302 "buf_count": 2048 00:15:55.302 } 00:15:55.302 } 00:15:55.302 ] 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "subsystem": "bdev", 00:15:55.302 "config": [ 00:15:55.302 { 00:15:55.302 "method": "bdev_set_options", 00:15:55.302 "params": { 00:15:55.302 "bdev_io_pool_size": 65535, 00:15:55.302 "bdev_io_cache_size": 256, 00:15:55.302 "bdev_auto_examine": true, 00:15:55.302 "iobuf_small_cache_size": 128, 00:15:55.302 "iobuf_large_cache_size": 16 00:15:55.302 } 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "method": "bdev_raid_set_options", 00:15:55.302 "params": { 00:15:55.302 "process_window_size_kb": 1024 00:15:55.302 } 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "method": "bdev_iscsi_set_options", 00:15:55.302 "params": { 00:15:55.302 "timeout_sec": 30 00:15:55.302 } 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "method": "bdev_nvme_set_options", 00:15:55.302 "params": { 00:15:55.302 "action_on_timeout": "none", 00:15:55.302 "timeout_us": 0, 00:15:55.302 "timeout_admin_us": 0, 00:15:55.302 "keep_alive_timeout_ms": 10000, 00:15:55.302 "arbitration_burst": 0, 00:15:55.302 "low_priority_weight": 0, 00:15:55.302 "medium_priority_weight": 0, 00:15:55.302 "high_priority_weight": 0, 
00:15:55.302 "nvme_adminq_poll_period_us": 10000, 00:15:55.302 "nvme_ioq_poll_period_us": 0, 00:15:55.302 "io_queue_requests": 512, 00:15:55.302 "delay_cmd_submit": true, 00:15:55.302 "transport_retry_count": 4, 00:15:55.302 "bdev_retry_count": 3, 00:15:55.302 "transport_ack_timeout": 0, 00:15:55.302 "ctrlr_loss_timeout_sec": 0, 00:15:55.302 "reconnect_delay_sec": 0, 00:15:55.302 "fast_io_fail_timeout_sec": 0, 00:15:55.302 "disable_auto_failback": false, 00:15:55.302 "generate_uuids": false, 00:15:55.302 "transport_tos": 0, 00:15:55.302 "nvme_error_stat": false, 00:15:55.302 "rdma_srq_size": 0, 00:15:55.302 "io_path_stat": false, 00:15:55.302 "allow_accel_sequence": false, 00:15:55.302 "rdma_max_cq_size": 0, 00:15:55.302 "rdma_cm_event_timeout_ms": 0, 00:15:55.302 "dhchap_digests": [ 00:15:55.302 "sha256", 00:15:55.302 "sha384", 00:15:55.302 "sha512" 00:15:55.302 ], 00:15:55.302 "dhchap_dhgroups": [ 00:15:55.302 "null", 00:15:55.302 "ffdhe2048", 00:15:55.302 "ffdhe3072", 00:15:55.302 "ffdhe4096", 00:15:55.302 "ffdhe6144", 00:15:55.302 "ffdhe8192" 00:15:55.302 ] 00:15:55.302 } 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "method": "bdev_nvme_attach_controller", 00:15:55.302 "params": { 00:15:55.302 "name": "nvme0", 00:15:55.302 "trtype": "TCP", 00:15:55.302 "adrfam": "IPv4", 00:15:55.302 "traddr": "10.0.0.2", 00:15:55.302 "trsvcid": "4420", 00:15:55.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:55.302 "prchk_reftag": false, 00:15:55.302 "prchk_guard": false, 00:15:55.302 "ctrlr_loss_timeout_sec": 0, 00:15:55.302 "reconnect_delay_sec": 0, 00:15:55.302 "fast_io_fail_timeout_sec": 0, 00:15:55.302 "psk": "key0", 00:15:55.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:55.302 "hdgst": false, 00:15:55.302 "ddgst": false 00:15:55.302 } 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "method": "bdev_nvme_set_hotplug", 00:15:55.302 "params": { 00:15:55.302 "period_us": 100000, 00:15:55.302 "enable": false 00:15:55.302 } 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "method": "bdev_enable_histogram", 00:15:55.302 "params": { 00:15:55.302 "name": "nvme0n1", 00:15:55.302 "enable": true 00:15:55.302 } 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "method": "bdev_wait_for_examine" 00:15:55.302 } 00:15:55.302 ] 00:15:55.302 }, 00:15:55.302 { 00:15:55.302 "subsystem": "nbd", 00:15:55.302 "config": [] 00:15:55.302 } 00:15:55.302 ] 00:15:55.302 }' 00:15:55.302 19:46:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:55.302 19:46:36 -- common/autotest_common.sh@10 -- # set +x 00:15:55.302 [2024-04-24 19:46:36.734962] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:15:55.302 [2024-04-24 19:46:36.735053] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707586 ] 00:15:55.302 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.302 [2024-04-24 19:46:36.797724] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.561 [2024-04-24 19:46:36.914901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.820 [2024-04-24 19:46:37.090002] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:56.387 19:46:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:56.387 19:46:37 -- common/autotest_common.sh@850 -- # return 0 00:15:56.387 19:46:37 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:56.387 19:46:37 -- target/tls.sh@275 -- # jq -r '.[].name' 00:15:56.653 19:46:37 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.653 19:46:37 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:56.653 Running I/O for 1 seconds... 00:15:57.592 00:15:57.592 Latency(us) 00:15:57.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.592 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:57.592 Verification LBA range: start 0x0 length 0x2000 00:15:57.592 nvme0n1 : 1.07 1557.11 6.08 0.00 0.00 80067.66 8689.59 113401.55 00:15:57.592 =================================================================================================================== 00:15:57.592 Total : 1557.11 6.08 0.00 0.00 80067.66 8689.59 113401.55 00:15:57.592 0 00:15:57.592 19:46:39 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:15:57.592 19:46:39 -- target/tls.sh@279 -- # cleanup 00:15:57.592 19:46:39 -- target/tls.sh@15 -- # process_shm --id 0 00:15:57.592 19:46:39 -- common/autotest_common.sh@794 -- # type=--id 00:15:57.592 19:46:39 -- common/autotest_common.sh@795 -- # id=0 00:15:57.592 19:46:39 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:57.592 19:46:39 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:57.592 19:46:39 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:57.592 19:46:39 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:57.592 19:46:39 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:57.592 19:46:39 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:57.592 nvmf_trace.0 00:15:57.879 19:46:39 -- common/autotest_common.sh@809 -- # return 0 00:15:57.879 19:46:39 -- target/tls.sh@16 -- # killprocess 1707586 00:15:57.879 19:46:39 -- common/autotest_common.sh@936 -- # '[' -z 1707586 ']' 00:15:57.879 19:46:39 -- common/autotest_common.sh@940 -- # kill -0 1707586 00:15:57.879 19:46:39 -- common/autotest_common.sh@941 -- # uname 00:15:57.879 19:46:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:57.879 19:46:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1707586 00:15:57.879 19:46:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:57.879 19:46:39 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:15:57.879 19:46:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1707586' 00:15:57.879 killing process with pid 1707586 00:15:57.879 19:46:39 -- common/autotest_common.sh@955 -- # kill 1707586 00:15:57.879 Received shutdown signal, test time was about 1.000000 seconds 00:15:57.879 00:15:57.879 Latency(us) 00:15:57.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.879 =================================================================================================================== 00:15:57.879 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:57.879 19:46:39 -- common/autotest_common.sh@960 -- # wait 1707586 00:15:58.139 19:46:39 -- target/tls.sh@17 -- # nvmftestfini 00:15:58.139 19:46:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:58.139 19:46:39 -- nvmf/common.sh@117 -- # sync 00:15:58.139 19:46:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:58.139 19:46:39 -- nvmf/common.sh@120 -- # set +e 00:15:58.139 19:46:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:58.139 19:46:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:58.139 rmmod nvme_tcp 00:15:58.139 rmmod nvme_fabrics 00:15:58.139 rmmod nvme_keyring 00:15:58.139 19:46:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:58.139 19:46:39 -- nvmf/common.sh@124 -- # set -e 00:15:58.139 19:46:39 -- nvmf/common.sh@125 -- # return 0 00:15:58.139 19:46:39 -- nvmf/common.sh@478 -- # '[' -n 1707433 ']' 00:15:58.139 19:46:39 -- nvmf/common.sh@479 -- # killprocess 1707433 00:15:58.139 19:46:39 -- common/autotest_common.sh@936 -- # '[' -z 1707433 ']' 00:15:58.139 19:46:39 -- common/autotest_common.sh@940 -- # kill -0 1707433 00:15:58.139 19:46:39 -- common/autotest_common.sh@941 -- # uname 00:15:58.139 19:46:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:58.139 19:46:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1707433 00:15:58.139 19:46:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:58.139 19:46:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:58.139 19:46:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1707433' 00:15:58.139 killing process with pid 1707433 00:15:58.139 19:46:39 -- common/autotest_common.sh@955 -- # kill 1707433 00:15:58.139 19:46:39 -- common/autotest_common.sh@960 -- # wait 1707433 00:15:58.397 19:46:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:58.397 19:46:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:58.398 19:46:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:58.398 19:46:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:58.398 19:46:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:58.398 19:46:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.398 19:46:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.398 19:46:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.937 19:46:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:00.937 19:46:41 -- target/tls.sh@18 -- # rm -f /tmp/tmp.cHuUKy4YaI /tmp/tmp.5GC57Ucw0e /tmp/tmp.R3XR65JnTD 00:16:00.937 00:16:00.937 real 1m22.444s 00:16:00.937 user 2m10.444s 00:16:00.937 sys 0m28.753s 00:16:00.937 19:46:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:00.937 19:46:41 -- common/autotest_common.sh@10 -- # set +x 00:16:00.937 ************************************ 00:16:00.937 END TEST nvmf_tls 00:16:00.937 
************************************ 00:16:00.937 19:46:41 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:00.937 19:46:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:00.937 19:46:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:00.937 19:46:41 -- common/autotest_common.sh@10 -- # set +x 00:16:00.937 ************************************ 00:16:00.937 START TEST nvmf_fips 00:16:00.937 ************************************ 00:16:00.937 19:46:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:00.937 * Looking for test storage... 00:16:00.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:16:00.937 19:46:42 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.937 19:46:42 -- nvmf/common.sh@7 -- # uname -s 00:16:00.937 19:46:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.937 19:46:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.937 19:46:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.937 19:46:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.937 19:46:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.937 19:46:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.937 19:46:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.937 19:46:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.937 19:46:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.937 19:46:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.937 19:46:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.937 19:46:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.937 19:46:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.937 19:46:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.937 19:46:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.937 19:46:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.938 19:46:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.938 19:46:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.938 19:46:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.938 19:46:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.938 19:46:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.938 19:46:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.938 19:46:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.938 19:46:42 -- paths/export.sh@5 -- # export PATH 00:16:00.938 19:46:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.938 19:46:42 -- nvmf/common.sh@47 -- # : 0 00:16:00.938 19:46:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:00.938 19:46:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:00.938 19:46:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.938 19:46:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.938 19:46:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.938 19:46:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:00.938 19:46:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:00.938 19:46:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:00.938 19:46:42 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:00.938 19:46:42 -- fips/fips.sh@89 -- # check_openssl_version 00:16:00.938 19:46:42 -- fips/fips.sh@83 -- # local target=3.0.0 00:16:00.938 19:46:42 -- fips/fips.sh@85 -- # openssl version 00:16:00.938 19:46:42 -- fips/fips.sh@85 -- # awk '{print $2}' 00:16:00.938 19:46:42 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:16:00.938 19:46:42 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:16:00.938 19:46:42 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:16:00.938 19:46:42 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:16:00.938 19:46:42 -- scripts/common.sh@333 -- # IFS=.-: 00:16:00.938 19:46:42 -- scripts/common.sh@333 -- # read -ra ver1 00:16:00.938 19:46:42 -- scripts/common.sh@334 -- # IFS=.-: 00:16:00.938 19:46:42 -- scripts/common.sh@334 -- # read -ra ver2 00:16:00.938 19:46:42 -- scripts/common.sh@335 -- # local 'op=>=' 00:16:00.938 19:46:42 -- scripts/common.sh@337 -- # ver1_l=3 00:16:00.938 19:46:42 -- scripts/common.sh@338 -- # ver2_l=3 00:16:00.938 19:46:42 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
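The cmp_versions trace that follows is verbose; a condensed sketch of the same field-by-field comparison (not the script's exact code; purely numeric version fields assumed):

    ge() {   # usage: ge 3.0.9 3.0.0  ->  success when $1 >= $2
        local IFS=. i
        local -a a=($1) b=($2)   # split each version on dots
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
        done
        return 0   # all fields equal counts as >=
    }
    ge "$(openssl version | awk '{print $2}')" 3.0.0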
00:16:00.938 19:46:42 -- scripts/common.sh@341 -- # case "$op" in 00:16:00.938 19:46:42 -- scripts/common.sh@345 -- # : 1 00:16:00.938 19:46:42 -- scripts/common.sh@361 -- # (( v = 0 )) 00:16:00.938 19:46:42 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:00.938 19:46:42 -- scripts/common.sh@362 -- # decimal 3 00:16:00.938 19:46:42 -- scripts/common.sh@350 -- # local d=3 00:16:00.938 19:46:42 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:00.938 19:46:42 -- scripts/common.sh@352 -- # echo 3 00:16:00.938 19:46:42 -- scripts/common.sh@362 -- # ver1[v]=3 00:16:00.938 19:46:42 -- scripts/common.sh@363 -- # decimal 3 00:16:00.938 19:46:42 -- scripts/common.sh@350 -- # local d=3 00:16:00.938 19:46:42 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:00.938 19:46:42 -- scripts/common.sh@352 -- # echo 3 00:16:00.938 19:46:42 -- scripts/common.sh@363 -- # ver2[v]=3 00:16:00.938 19:46:42 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:00.938 19:46:42 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:00.938 19:46:42 -- scripts/common.sh@361 -- # (( v++ )) 00:16:00.938 19:46:42 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:00.938 19:46:42 -- scripts/common.sh@362 -- # decimal 0 00:16:00.938 19:46:42 -- scripts/common.sh@350 -- # local d=0 00:16:00.938 19:46:42 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:00.938 19:46:42 -- scripts/common.sh@352 -- # echo 0 00:16:00.938 19:46:42 -- scripts/common.sh@362 -- # ver1[v]=0 00:16:00.938 19:46:42 -- scripts/common.sh@363 -- # decimal 0 00:16:00.938 19:46:42 -- scripts/common.sh@350 -- # local d=0 00:16:00.938 19:46:42 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:00.938 19:46:42 -- scripts/common.sh@352 -- # echo 0 00:16:00.938 19:46:42 -- scripts/common.sh@363 -- # ver2[v]=0 00:16:00.938 19:46:42 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:00.938 19:46:42 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:00.938 19:46:42 -- scripts/common.sh@361 -- # (( v++ )) 00:16:00.938 19:46:42 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:00.938 19:46:42 -- scripts/common.sh@362 -- # decimal 9 00:16:00.938 19:46:42 -- scripts/common.sh@350 -- # local d=9 00:16:00.938 19:46:42 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:16:00.938 19:46:42 -- scripts/common.sh@352 -- # echo 9 00:16:00.938 19:46:42 -- scripts/common.sh@362 -- # ver1[v]=9 00:16:00.938 19:46:42 -- scripts/common.sh@363 -- # decimal 0 00:16:00.938 19:46:42 -- scripts/common.sh@350 -- # local d=0 00:16:00.938 19:46:42 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:00.938 19:46:42 -- scripts/common.sh@352 -- # echo 0 00:16:00.938 19:46:42 -- scripts/common.sh@363 -- # ver2[v]=0 00:16:00.938 19:46:42 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:00.938 19:46:42 -- scripts/common.sh@364 -- # return 0 00:16:00.938 19:46:42 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:16:00.938 19:46:42 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:16:00.938 19:46:42 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:16:00.938 19:46:42 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:00.939 19:46:42 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:00.939 19:46:42 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:16:00.939 19:46:42 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:16:00.939 19:46:42 -- fips/fips.sh@113 -- # build_openssl_config 00:16:00.939 19:46:42 -- fips/fips.sh@37 -- # cat 00:16:00.939 19:46:42 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:16:00.939 19:46:42 -- fips/fips.sh@58 -- # cat - 00:16:00.939 19:46:42 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:00.939 19:46:42 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:16:00.939 19:46:42 -- fips/fips.sh@116 -- # mapfile -t providers 00:16:00.939 19:46:42 -- fips/fips.sh@116 -- # openssl list -providers 00:16:00.939 19:46:42 -- fips/fips.sh@116 -- # grep name 00:16:00.939 19:46:42 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:16:00.939 19:46:42 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:16:00.939 19:46:42 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:00.939 19:46:42 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:16:00.939 19:46:42 -- fips/fips.sh@127 -- # : 00:16:00.939 19:46:42 -- common/autotest_common.sh@638 -- # local es=0 00:16:00.939 19:46:42 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:00.939 19:46:42 -- common/autotest_common.sh@626 -- # local arg=openssl 00:16:00.939 19:46:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:00.939 19:46:42 -- common/autotest_common.sh@630 -- # type -t openssl 00:16:00.939 19:46:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:00.939 19:46:42 -- common/autotest_common.sh@632 -- # type -P openssl 00:16:00.939 19:46:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:00.939 19:46:42 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:16:00.939 19:46:42 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:16:00.939 19:46:42 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:16:00.939 Error setting digest 00:16:00.939 00B2C8F5D87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:16:00.939 00B2C8F5D87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:16:00.939 19:46:42 -- common/autotest_common.sh@641 -- # es=1 00:16:00.939 19:46:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:00.939 19:46:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:00.939 19:46:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:00.939 19:46:42 -- fips/fips.sh@130 -- # nvmftestinit 00:16:00.939 19:46:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:00.939 19:46:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.939 19:46:42 -- nvmf/common.sh@437 -- # prepare_net_devs 
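The "Error setting digest" output above is the expected result; a sketch of that self-check (assumes the spdk_fips.conf generated a few lines earlier sits in the current directory):

    # under an active FIPS provider the legacy md5 digest must be refused,
    # so a zero exit status here would mean FIPS mode is not enforced
    if echo test | OPENSSL_CONF=spdk_fips.conf openssl md5 >/dev/null 2>&1; then
        echo 'FIPS not enforced: md5 unexpectedly succeeded' >&2
        exit 1
    fi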
00:16:00.939 19:46:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:00.939 19:46:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:00.939 19:46:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.939 19:46:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:00.939 19:46:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.939 19:46:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:00.939 19:46:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:00.939 19:46:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:00.939 19:46:42 -- common/autotest_common.sh@10 -- # set +x 00:16:02.846 19:46:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:02.846 19:46:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:02.846 19:46:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:02.846 19:46:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:02.846 19:46:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:02.846 19:46:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:02.846 19:46:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:02.846 19:46:44 -- nvmf/common.sh@295 -- # net_devs=() 00:16:02.846 19:46:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:02.846 19:46:44 -- nvmf/common.sh@296 -- # e810=() 00:16:02.846 19:46:44 -- nvmf/common.sh@296 -- # local -ga e810 00:16:02.846 19:46:44 -- nvmf/common.sh@297 -- # x722=() 00:16:02.846 19:46:44 -- nvmf/common.sh@297 -- # local -ga x722 00:16:02.846 19:46:44 -- nvmf/common.sh@298 -- # mlx=() 00:16:02.846 19:46:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:02.846 19:46:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:02.847 19:46:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:02.847 19:46:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:02.847 19:46:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:02.847 19:46:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:02.847 19:46:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:02.847 19:46:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:02.847 19:46:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:02.847 19:46:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:02.847 19:46:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:02.847 19:46:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:02.847 19:46:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:02.847 19:46:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:02.847 19:46:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:02.847 19:46:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:02.847 19:46:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:02.847 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:02.847 19:46:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:02.847 19:46:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:02.847 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:02.847 19:46:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:02.847 19:46:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:02.847 19:46:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.847 19:46:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:02.847 19:46:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.847 19:46:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:02.847 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:02.847 19:46:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.847 19:46:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:02.847 19:46:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:02.847 19:46:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:02.847 19:46:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:02.847 19:46:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:02.847 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:02.847 19:46:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:02.847 19:46:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:02.847 19:46:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:02.847 19:46:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:02.847 19:46:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.847 19:46:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.847 19:46:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:02.847 19:46:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:02.847 19:46:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:02.847 19:46:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:02.847 19:46:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:02.847 19:46:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:02.847 19:46:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.847 19:46:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:02.847 19:46:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:02.847 19:46:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:02.847 19:46:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:02.847 19:46:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:02.847 19:46:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:16:02.847 19:46:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:02.847 19:46:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:02.847 19:46:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:02.847 19:46:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:02.847 19:46:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:02.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:16:02.847 00:16:02.847 --- 10.0.0.2 ping statistics --- 00:16:02.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.847 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:16:02.847 19:46:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:02.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:02.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:16:02.847 00:16:02.847 --- 10.0.0.1 ping statistics --- 00:16:02.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.847 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:16:02.847 19:46:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.847 19:46:44 -- nvmf/common.sh@411 -- # return 0 00:16:02.847 19:46:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:02.847 19:46:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.847 19:46:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:02.847 19:46:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.847 19:46:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:02.847 19:46:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:02.847 19:46:44 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:16:02.847 19:46:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:02.847 19:46:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:02.847 19:46:44 -- common/autotest_common.sh@10 -- # set +x 00:16:02.847 19:46:44 -- nvmf/common.sh@470 -- # nvmfpid=1709951 00:16:02.847 19:46:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:02.847 19:46:44 -- nvmf/common.sh@471 -- # waitforlisten 1709951 00:16:02.847 19:46:44 -- common/autotest_common.sh@817 -- # '[' -z 1709951 ']' 00:16:02.847 19:46:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.847 19:46:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:02.847 19:46:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.847 19:46:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:02.847 19:46:44 -- common/autotest_common.sh@10 -- # set +x 00:16:02.847 [2024-04-24 19:46:44.339399] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
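For reference, the namespace wiring exercised by the two pings above, collected from the preceding trace lines (device and namespace names as in this log):

    ip netns add cvl_0_0_ns_spdk                 # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT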
00:16:02.847 [2024-04-24 19:46:44.339479] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.105 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.105 [2024-04-24 19:46:44.404279] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.105 [2024-04-24 19:46:44.509535] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.105 [2024-04-24 19:46:44.509585] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.105 [2024-04-24 19:46:44.509608] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.105 [2024-04-24 19:46:44.509641] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.106 [2024-04-24 19:46:44.509652] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.106 [2024-04-24 19:46:44.509686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.059 19:46:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:04.059 19:46:45 -- common/autotest_common.sh@850 -- # return 0 00:16:04.059 19:46:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:04.059 19:46:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:04.059 19:46:45 -- common/autotest_common.sh@10 -- # set +x 00:16:04.059 19:46:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.059 19:46:45 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:16:04.059 19:46:45 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:04.059 19:46:45 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:16:04.059 19:46:45 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:04.059 19:46:45 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:16:04.059 19:46:45 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:16:04.059 19:46:45 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:16:04.059 19:46:45 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.059 [2024-04-24 19:46:45.573329] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.318 [2024-04-24 19:46:45.589319] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:04.318 [2024-04-24 19:46:45.589548] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.318 [2024-04-24 19:46:45.621852] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:04.318 malloc0 00:16:04.318 19:46:45 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:04.318 19:46:45 -- fips/fips.sh@147 -- # bdevperf_pid=1710111 00:16:04.318 19:46:45 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:04.318 19:46:45 -- 
fips/fips.sh@148 -- # waitforlisten 1710111 /var/tmp/bdevperf.sock 00:16:04.318 19:46:45 -- common/autotest_common.sh@817 -- # '[' -z 1710111 ']' 00:16:04.318 19:46:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:04.318 19:46:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:04.318 19:46:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:04.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:04.318 19:46:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:04.318 19:46:45 -- common/autotest_common.sh@10 -- # set +x 00:16:04.318 [2024-04-24 19:46:45.714461] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:16:04.318 [2024-04-24 19:46:45.714555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1710111 ] 00:16:04.318 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.318 [2024-04-24 19:46:45.774668] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.577 [2024-04-24 19:46:45.883365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.515 19:46:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:05.515 19:46:46 -- common/autotest_common.sh@850 -- # return 0 00:16:05.515 19:46:46 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:16:05.515 [2024-04-24 19:46:46.885174] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:05.515 [2024-04-24 19:46:46.885312] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:05.515 TLSTESTn1 00:16:05.515 19:46:46 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:05.774 Running I/O for 10 seconds... 
00:16:15.764
00:16:15.764 Latency(us)
00:16:15.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:15.764 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:16:15.764 Verification LBA range: start 0x0 length 0x2000
00:16:15.764 TLSTESTn1 : 10.07 1588.37 6.20 0.00 0.00 80331.88 8689.59 111071.38
00:16:15.764 ===================================================================================================================
00:16:15.764 Total : 1588.37 6.20 0.00 0.00 80331.88 8689.59 111071.38
00:16:15.764 0
00:16:15.764 19:46:57 -- fips/fips.sh@1 -- # cleanup
00:16:15.764 19:46:57 -- fips/fips.sh@15 -- # process_shm --id 0
00:16:15.764 19:46:57 -- common/autotest_common.sh@794 -- # type=--id
00:16:15.764 19:46:57 -- common/autotest_common.sh@795 -- # id=0
00:16:15.764 19:46:57 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']'
00:16:15.764 19:46:57 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:16:15.764 19:46:57 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0
00:16:15.764 19:46:57 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]]
00:16:15.764 19:46:57 -- common/autotest_common.sh@806 -- # for n in $shm_files
00:16:15.764 19:46:57 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:16:15.764 nvmf_trace.0
00:16:15.764 19:46:57 -- common/autotest_common.sh@809 -- # return 0
00:16:15.764 19:46:57 -- fips/fips.sh@16 -- # killprocess 1710111
00:16:15.764 19:46:57 -- common/autotest_common.sh@936 -- # '[' -z 1710111 ']'
00:16:15.764 19:46:57 -- common/autotest_common.sh@940 -- # kill -0 1710111
00:16:15.764 19:46:57 -- common/autotest_common.sh@941 -- # uname
00:16:15.764 19:46:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:15.764 19:46:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1710111
00:16:16.025 19:46:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:16:16.025 19:46:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:16:16.025 19:46:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1710111'
00:16:16.025 killing process with pid 1710111
00:16:16.025 19:46:57 -- common/autotest_common.sh@955 -- # kill 1710111
00:16:16.025 Received shutdown signal, test time was about 10.000000 seconds
00:16:16.025
00:16:16.025 Latency(us)
00:16:16.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:16.025 ===================================================================================================================
00:16:16.025 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:16.025 [2024-04-24 19:46:57.300131] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:16:16.025 19:46:57 -- common/autotest_common.sh@960 -- # wait 1710111
00:16:16.285 19:46:57 -- fips/fips.sh@17 -- # nvmftestfini
00:16:16.285 19:46:57 -- nvmf/common.sh@477 -- # nvmfcleanup
00:16:16.285 19:46:57 -- nvmf/common.sh@117 -- # sync
00:16:16.285 19:46:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:16.285 19:46:57 -- nvmf/common.sh@120 -- # set +e
00:16:16.285 19:46:57 -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:16.285 19:46:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:16.285 rmmod nvme_tcp
00:16:16.285 rmmod nvme_fabrics
00:16:16.285 rmmod nvme_keyring
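[annotation] Stripped of the xtrace prefixes, the TLS path the nvmf_fips test above exercises comes down to a handful of commands. A minimal sketch, assuming an SPDK target already listening on 10.0.0.2:4420 and a bdevperf instance serving RPCs on /var/tmp/bdevperf.sock (key value and paths taken verbatim from the trace; run from the SPDK repo root):

    # Write the TLS PSK interchange key and lock down its permissions
    key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key"
    chmod 0600 "$key"

    # Attach a TLS-protected NVMe-oF controller through the bdevperf RPC socket
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"

    # Drive verified I/O for 10 seconds over the TLS connection
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Note the two deprecation warnings in the trace: both the listener's PSK path and spdk_nvme_ctrlr_opts.psk are scheduled for removal in v24.09, so this exact flow is tied to the SPDK version under test.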
00:16:16.285 19:46:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:16.285 19:46:57 -- nvmf/common.sh@124 -- # set -e 00:16:16.285 19:46:57 -- nvmf/common.sh@125 -- # return 0 00:16:16.285 19:46:57 -- nvmf/common.sh@478 -- # '[' -n 1709951 ']' 00:16:16.285 19:46:57 -- nvmf/common.sh@479 -- # killprocess 1709951 00:16:16.285 19:46:57 -- common/autotest_common.sh@936 -- # '[' -z 1709951 ']' 00:16:16.285 19:46:57 -- common/autotest_common.sh@940 -- # kill -0 1709951 00:16:16.285 19:46:57 -- common/autotest_common.sh@941 -- # uname 00:16:16.285 19:46:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.285 19:46:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1709951 00:16:16.285 19:46:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:16.285 19:46:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:16.285 19:46:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1709951' 00:16:16.285 killing process with pid 1709951 00:16:16.285 19:46:57 -- common/autotest_common.sh@955 -- # kill 1709951 00:16:16.285 [2024-04-24 19:46:57.657475] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:16.285 19:46:57 -- common/autotest_common.sh@960 -- # wait 1709951 00:16:16.545 19:46:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:16.545 19:46:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:16.545 19:46:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:16.545 19:46:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.545 19:46:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:16.545 19:46:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.545 19:46:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.545 19:46:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.085 19:46:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:19.085 19:46:59 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:16:19.085 00:16:19.085 real 0m18.001s 00:16:19.085 user 0m22.893s 00:16:19.085 sys 0m6.715s 00:16:19.085 19:46:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:19.085 19:46:59 -- common/autotest_common.sh@10 -- # set +x 00:16:19.085 ************************************ 00:16:19.085 END TEST nvmf_fips 00:16:19.085 ************************************ 00:16:19.085 19:47:00 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:16:19.085 19:47:00 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:16:19.085 19:47:00 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:16:19.085 19:47:00 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:16:19.085 19:47:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:19.085 19:47:00 -- common/autotest_common.sh@10 -- # set +x 00:16:20.991 19:47:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:20.991 19:47:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:20.991 19:47:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:20.991 19:47:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:20.991 19:47:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:20.991 19:47:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:20.991 19:47:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:20.991 19:47:01 -- nvmf/common.sh@295 -- # net_devs=() 00:16:20.991 19:47:01 -- nvmf/common.sh@295 -- # local -ga net_devs 
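[annotation] The device scan that begins here (and is re-run ahead of every test in this job) is worth a gloss: nvmf/common.sh buckets known Intel/Mellanox PCI IDs, then resolves each matching function to its kernel netdev through sysfs. Schematically, and only as a sketch of the traced logic rather than the verbatim helper:

    # Resolve each supported NIC PCI function (here two 8086:159b E810 ports) to a netdev
    shopt -s nullglob                                      # let an empty glob expand to nothing
    net_devs=()
    for pci in "${pci_devs[@]}"; do                        # e.g. 0000:0a:00.0, 0000:0a:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per bound netdev
        (( ${#pci_net_devs[@]} == 0 )) && continue         # no driver bound, skip this function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

This is why the log keeps repeating the "Found 0000:0a:00.x (0x8086 - 0x159b)" / "Found net devices under 0000:0a:00.x: cvl_0_x" pairs.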
00:16:20.991 19:47:01 -- nvmf/common.sh@296 -- # e810=() 00:16:20.991 19:47:01 -- nvmf/common.sh@296 -- # local -ga e810 00:16:20.991 19:47:01 -- nvmf/common.sh@297 -- # x722=() 00:16:20.991 19:47:01 -- nvmf/common.sh@297 -- # local -ga x722 00:16:20.991 19:47:01 -- nvmf/common.sh@298 -- # mlx=() 00:16:20.991 19:47:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:20.991 19:47:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.991 19:47:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.991 19:47:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.991 19:47:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.991 19:47:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.991 19:47:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.991 19:47:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.991 19:47:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.991 19:47:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.991 19:47:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.991 19:47:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.991 19:47:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:20.991 19:47:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:20.991 19:47:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:20.991 19:47:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.991 19:47:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:20.991 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:20.991 19:47:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:20.991 19:47:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:20.991 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:20.991 19:47:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:20.991 19:47:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:20.991 19:47:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.991 19:47:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.991 19:47:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:20.991 19:47:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.991 19:47:01 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:16:20.991 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:20.991 19:47:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.991 19:47:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:20.991 19:47:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.991 19:47:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:20.991 19:47:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.991 19:47:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:20.991 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:20.991 19:47:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.991 19:47:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:20.991 19:47:01 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.991 19:47:01 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:16:20.991 19:47:01 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:16:20.991 19:47:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:20.991 19:47:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:20.991 19:47:01 -- common/autotest_common.sh@10 -- # set +x 00:16:20.991 ************************************ 00:16:20.991 START TEST nvmf_perf_adq 00:16:20.991 ************************************ 00:16:20.991 19:47:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:16:20.991 * Looking for test storage... 00:16:20.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:20.991 19:47:02 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:20.991 19:47:02 -- nvmf/common.sh@7 -- # uname -s 00:16:20.991 19:47:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.991 19:47:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.991 19:47:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.991 19:47:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.991 19:47:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.991 19:47:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.991 19:47:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.991 19:47:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.991 19:47:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.991 19:47:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.991 19:47:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.991 19:47:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.991 19:47:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.991 19:47:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.991 19:47:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:20.991 19:47:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.991 19:47:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:20.991 19:47:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.991 19:47:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.991 19:47:02 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.991 19:47:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.991 19:47:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.991 19:47:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.991 19:47:02 -- paths/export.sh@5 -- # export PATH 00:16:20.991 19:47:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.991 19:47:02 -- nvmf/common.sh@47 -- # : 0 00:16:20.991 19:47:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:20.991 19:47:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:20.991 19:47:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.991 19:47:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.991 19:47:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.991 19:47:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:20.991 19:47:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:20.991 19:47:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:20.991 19:47:02 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:16:20.991 19:47:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:20.991 19:47:02 -- common/autotest_common.sh@10 -- # set +x 00:16:22.894 19:47:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:22.894 19:47:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:22.894 19:47:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:22.894 19:47:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:22.894 
19:47:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:22.894 19:47:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:22.894 19:47:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:22.894 19:47:04 -- nvmf/common.sh@295 -- # net_devs=() 00:16:22.894 19:47:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:22.894 19:47:04 -- nvmf/common.sh@296 -- # e810=() 00:16:22.894 19:47:04 -- nvmf/common.sh@296 -- # local -ga e810 00:16:22.894 19:47:04 -- nvmf/common.sh@297 -- # x722=() 00:16:22.895 19:47:04 -- nvmf/common.sh@297 -- # local -ga x722 00:16:22.895 19:47:04 -- nvmf/common.sh@298 -- # mlx=() 00:16:22.895 19:47:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:22.895 19:47:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.895 19:47:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.895 19:47:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.895 19:47:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.895 19:47:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.895 19:47:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.895 19:47:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.895 19:47:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.895 19:47:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.895 19:47:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.895 19:47:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.895 19:47:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:22.895 19:47:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:22.895 19:47:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:22.895 19:47:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.895 19:47:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:22.895 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:22.895 19:47:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.895 19:47:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:22.895 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:22.895 19:47:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:22.895 19:47:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:22.895 19:47:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:16:22.895 19:47:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.895 19:47:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:22.895 19:47:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.895 19:47:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:22.895 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:22.895 19:47:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.895 19:47:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.895 19:47:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.895 19:47:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:22.895 19:47:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.895 19:47:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:22.895 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:22.895 19:47:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.895 19:47:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:22.895 19:47:04 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.895 19:47:04 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:16:22.895 19:47:04 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:16:22.895 19:47:04 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:16:22.895 19:47:04 -- target/perf_adq.sh@52 -- # rmmod ice 00:16:23.463 19:47:04 -- target/perf_adq.sh@53 -- # modprobe ice 00:16:25.373 19:47:06 -- target/perf_adq.sh@54 -- # sleep 5 00:16:30.643 19:47:11 -- target/perf_adq.sh@67 -- # nvmftestinit 00:16:30.643 19:47:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:30.643 19:47:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.643 19:47:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:30.643 19:47:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:30.643 19:47:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:30.643 19:47:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.643 19:47:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.643 19:47:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.643 19:47:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:30.643 19:47:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:30.643 19:47:11 -- common/autotest_common.sh@10 -- # set +x 00:16:30.643 19:47:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:30.643 19:47:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:30.643 19:47:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:30.643 19:47:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:30.643 19:47:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:30.643 19:47:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:30.643 19:47:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:30.643 19:47:11 -- nvmf/common.sh@295 -- # net_devs=() 00:16:30.643 19:47:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:30.643 19:47:11 -- nvmf/common.sh@296 -- # e810=() 00:16:30.643 19:47:11 -- nvmf/common.sh@296 -- # local -ga e810 00:16:30.643 19:47:11 -- nvmf/common.sh@297 -- # x722=() 00:16:30.643 19:47:11 -- nvmf/common.sh@297 -- # local -ga x722 00:16:30.643 19:47:11 -- nvmf/common.sh@298 -- # mlx=() 00:16:30.643 19:47:11 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:16:30.643 19:47:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.643 19:47:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.643 19:47:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.643 19:47:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.643 19:47:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.643 19:47:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.643 19:47:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.643 19:47:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.643 19:47:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.643 19:47:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.643 19:47:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.643 19:47:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:30.643 19:47:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:30.643 19:47:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:30.643 19:47:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.643 19:47:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:30.643 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:30.643 19:47:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.643 19:47:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:30.643 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:30.643 19:47:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:30.643 19:47:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.643 19:47:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.643 19:47:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:30.643 19:47:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.643 19:47:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:30.643 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:30.643 19:47:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.643 19:47:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.643 19:47:11 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.643 19:47:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:30.643 19:47:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.643 19:47:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:30.643 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:30.643 19:47:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.643 19:47:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:30.643 19:47:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:30.643 19:47:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:30.643 19:47:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:30.643 19:47:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.643 19:47:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.643 19:47:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.643 19:47:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:30.644 19:47:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.644 19:47:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.644 19:47:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:30.644 19:47:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.644 19:47:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.644 19:47:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:30.644 19:47:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:30.644 19:47:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.644 19:47:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.644 19:47:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.644 19:47:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.644 19:47:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:30.644 19:47:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.644 19:47:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.644 19:47:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.644 19:47:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:30.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:16:30.644 00:16:30.644 --- 10.0.0.2 ping statistics --- 00:16:30.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.644 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:16:30.644 19:47:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:16:30.644 00:16:30.644 --- 10.0.0.1 ping statistics --- 00:16:30.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.644 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:16:30.644 19:47:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.644 19:47:11 -- nvmf/common.sh@411 -- # return 0 00:16:30.644 19:47:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:30.644 19:47:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.644 19:47:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:30.644 19:47:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:30.644 19:47:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.644 19:47:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:30.644 19:47:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:30.644 19:47:11 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:30.644 19:47:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:30.644 19:47:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:30.644 19:47:11 -- common/autotest_common.sh@10 -- # set +x 00:16:30.644 19:47:11 -- nvmf/common.sh@470 -- # nvmfpid=1715994 00:16:30.644 19:47:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:30.644 19:47:11 -- nvmf/common.sh@471 -- # waitforlisten 1715994 00:16:30.644 19:47:11 -- common/autotest_common.sh@817 -- # '[' -z 1715994 ']' 00:16:30.644 19:47:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.644 19:47:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:30.644 19:47:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.644 19:47:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:30.644 19:47:11 -- common/autotest_common.sh@10 -- # set +x 00:16:30.644 [2024-04-24 19:47:11.937285] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:16:30.644 [2024-04-24 19:47:11.937355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.644 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.644 [2024-04-24 19:47:12.001016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:30.644 [2024-04-24 19:47:12.108381] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.644 [2024-04-24 19:47:12.108447] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.644 [2024-04-24 19:47:12.108470] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.644 [2024-04-24 19:47:12.108481] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.644 [2024-04-24 19:47:12.108490] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
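[annotation] The namespace plumbing traced just above is the same nvmf_tcp_init recipe used throughout this job: one E810 port stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1), the other moves into a private namespace as the target (cvl_0_0, 10.0.0.2), and the target app is started inside that namespace. Condensed into plain commands, as a sketch of the traced helpers with the SPDK repo as working directory:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1            # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open TCP/4420 on the initiator-side interface
    ping -c 1 10.0.0.2                                              # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator

    # Launch the target inside the namespace, paused until RPC configuration arrives
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &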
00:16:30.644 [2024-04-24 19:47:12.108581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.644 [2024-04-24 19:47:12.108653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.644 [2024-04-24 19:47:12.108711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:30.644 [2024-04-24 19:47:12.108714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.644 19:47:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:30.644 19:47:12 -- common/autotest_common.sh@850 -- # return 0 00:16:30.644 19:47:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:30.644 19:47:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:30.644 19:47:12 -- common/autotest_common.sh@10 -- # set +x 00:16:30.915 19:47:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.915 19:47:12 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:16:30.915 19:47:12 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:16:30.915 19:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.915 19:47:12 -- common/autotest_common.sh@10 -- # set +x 00:16:30.915 19:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.915 19:47:12 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:16:30.915 19:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.915 19:47:12 -- common/autotest_common.sh@10 -- # set +x 00:16:30.915 19:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.915 19:47:12 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:16:30.915 19:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.915 19:47:12 -- common/autotest_common.sh@10 -- # set +x 00:16:30.915 [2024-04-24 19:47:12.275154] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.915 19:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.915 19:47:12 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:30.915 19:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.915 19:47:12 -- common/autotest_common.sh@10 -- # set +x 00:16:30.915 Malloc1 00:16:30.915 19:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.915 19:47:12 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:30.915 19:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.915 19:47:12 -- common/autotest_common.sh@10 -- # set +x 00:16:30.915 19:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.915 19:47:12 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:30.915 19:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.915 19:47:12 -- common/autotest_common.sh@10 -- # set +x 00:16:30.915 19:47:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.915 19:47:12 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.915 19:47:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.915 19:47:12 -- common/autotest_common.sh@10 -- # set +x 00:16:30.915 [2024-04-24 19:47:12.327212] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.915 19:47:12 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.915 19:47:12 -- target/perf_adq.sh@73 -- # perfpid=1716035 00:16:30.915 19:47:12 -- target/perf_adq.sh@74 -- # sleep 2 00:16:30.915 19:47:12 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:30.915 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.447 19:47:14 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:16:33.447 19:47:14 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:16:33.447 19:47:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.447 19:47:14 -- target/perf_adq.sh@76 -- # wc -l 00:16:33.447 19:47:14 -- common/autotest_common.sh@10 -- # set +x 00:16:33.447 19:47:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.447 19:47:14 -- target/perf_adq.sh@76 -- # count=4 00:16:33.447 19:47:14 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:16:33.447 19:47:14 -- target/perf_adq.sh@81 -- # wait 1716035 00:16:41.559 Initializing NVMe Controllers 00:16:41.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:41.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:41.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:41.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:41.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:41.559 Initialization complete. Launching workers. 00:16:41.559 ======================================================== 00:16:41.559 Latency(us) 00:16:41.559 Device Information : IOPS MiB/s Average min max 00:16:41.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9596.90 37.49 6669.01 3023.50 9791.45 00:16:41.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10111.40 39.50 6329.02 2255.66 8973.97 00:16:41.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9982.30 38.99 6411.72 2295.22 9447.78 00:16:41.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10214.70 39.90 6266.30 2258.97 8924.28 00:16:41.559 ======================================================== 00:16:41.559 Total : 39905.30 155.88 6415.42 2255.66 9791.45 00:16:41.559 00:16:41.559 19:47:22 -- target/perf_adq.sh@82 -- # nvmftestfini 00:16:41.559 19:47:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:41.559 19:47:22 -- nvmf/common.sh@117 -- # sync 00:16:41.559 19:47:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:41.559 19:47:22 -- nvmf/common.sh@120 -- # set +e 00:16:41.559 19:47:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:41.559 19:47:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:41.559 rmmod nvme_tcp 00:16:41.559 rmmod nvme_fabrics 00:16:41.559 rmmod nvme_keyring 00:16:41.559 19:47:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:41.559 19:47:22 -- nvmf/common.sh@124 -- # set -e 00:16:41.559 19:47:22 -- nvmf/common.sh@125 -- # return 0 00:16:41.559 19:47:22 -- nvmf/common.sh@478 -- # '[' -n 1715994 ']' 00:16:41.559 19:47:22 -- nvmf/common.sh@479 -- # killprocess 1715994 00:16:41.559 19:47:22 -- common/autotest_common.sh@936 -- # '[' -z 1715994 ']' 00:16:41.559 19:47:22 -- common/autotest_common.sh@940 -- # 
kill -0 1715994 00:16:41.559 19:47:22 -- common/autotest_common.sh@941 -- # uname 00:16:41.559 19:47:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:41.559 19:47:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1715994 00:16:41.559 19:47:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:41.559 19:47:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:41.559 19:47:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1715994' 00:16:41.559 killing process with pid 1715994 00:16:41.559 19:47:22 -- common/autotest_common.sh@955 -- # kill 1715994 00:16:41.559 19:47:22 -- common/autotest_common.sh@960 -- # wait 1715994 00:16:41.559 19:47:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:41.559 19:47:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:41.559 19:47:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:41.559 19:47:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.559 19:47:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:41.559 19:47:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.559 19:47:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.559 19:47:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.467 19:47:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:43.467 19:47:24 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:16:43.467 19:47:24 -- target/perf_adq.sh@52 -- # rmmod ice 00:16:44.033 19:47:25 -- target/perf_adq.sh@53 -- # modprobe ice 00:16:46.567 19:47:27 -- target/perf_adq.sh@54 -- # sleep 5 00:16:51.859 19:47:32 -- target/perf_adq.sh@87 -- # nvmftestinit 00:16:51.859 19:47:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:51.859 19:47:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.859 19:47:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:51.859 19:47:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:51.859 19:47:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:51.859 19:47:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.859 19:47:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.859 19:47:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.859 19:47:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:51.859 19:47:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:51.859 19:47:32 -- common/autotest_common.sh@10 -- # set +x 00:16:51.859 19:47:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:51.859 19:47:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:51.859 19:47:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:51.859 19:47:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:51.859 19:47:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:51.859 19:47:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:51.859 19:47:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:51.859 19:47:32 -- nvmf/common.sh@295 -- # net_devs=() 00:16:51.859 19:47:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:51.859 19:47:32 -- nvmf/common.sh@296 -- # e810=() 00:16:51.859 19:47:32 -- nvmf/common.sh@296 -- # local -ga e810 00:16:51.859 19:47:32 -- nvmf/common.sh@297 -- # x722=() 00:16:51.859 19:47:32 -- nvmf/common.sh@297 -- # local -ga x722 00:16:51.859 19:47:32 -- nvmf/common.sh@298 -- # mlx=() 00:16:51.859 19:47:32 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:16:51.859 19:47:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.859 19:47:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.859 19:47:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.859 19:47:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.859 19:47:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.859 19:47:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.859 19:47:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.859 19:47:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.859 19:47:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.859 19:47:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.859 19:47:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.859 19:47:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:51.859 19:47:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:51.859 19:47:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:51.859 19:47:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.859 19:47:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:51.859 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:51.859 19:47:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.859 19:47:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:51.859 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:51.859 19:47:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:51.859 19:47:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.859 19:47:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.859 19:47:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:51.859 19:47:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.859 19:47:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:51.859 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:51.859 19:47:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.859 19:47:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.859 19:47:32 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.859 19:47:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:51.859 19:47:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.859 19:47:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:51.859 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:51.859 19:47:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.859 19:47:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:51.859 19:47:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:51.859 19:47:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:51.859 19:47:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:51.860 19:47:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:51.860 19:47:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.860 19:47:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.860 19:47:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.860 19:47:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:51.860 19:47:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.860 19:47:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.860 19:47:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:51.860 19:47:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.860 19:47:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.860 19:47:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:51.860 19:47:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:51.860 19:47:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.860 19:47:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.860 19:47:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.860 19:47:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.860 19:47:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:51.860 19:47:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.860 19:47:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.860 19:47:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.860 19:47:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:51.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:16:51.860 00:16:51.860 --- 10.0.0.2 ping statistics --- 00:16:51.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.860 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:16:51.860 19:47:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:51.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:16:51.860 00:16:51.860 --- 10.0.0.1 ping statistics --- 00:16:51.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.860 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:16:51.860 19:47:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.860 19:47:32 -- nvmf/common.sh@411 -- # return 0 00:16:51.860 19:47:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:51.860 19:47:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.860 19:47:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:51.860 19:47:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:51.860 19:47:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.860 19:47:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:51.860 19:47:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:51.860 19:47:32 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:16:51.860 19:47:32 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:16:51.860 19:47:32 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:16:51.860 19:47:32 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:16:51.860 net.core.busy_poll = 1 00:16:51.860 19:47:32 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:16:51.860 net.core.busy_read = 1 00:16:51.860 19:47:32 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:16:51.860 19:47:32 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:16:51.860 19:47:32 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:16:51.860 19:47:32 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:16:51.860 19:47:32 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:16:51.860 19:47:32 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:51.860 19:47:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:51.860 19:47:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:51.860 19:47:32 -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 19:47:32 -- nvmf/common.sh@470 -- # nvmfpid=1718671 00:16:51.860 19:47:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:51.860 19:47:32 -- nvmf/common.sh@471 -- # waitforlisten 1718671 00:16:51.860 19:47:32 -- common/autotest_common.sh@817 -- # '[' -z 1718671 ']' 00:16:51.860 19:47:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.860 19:47:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:51.860 19:47:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
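The trace above builds the whole test topology and enables ADQ: one E810 port (cvl_0_0) is moved into a private network namespace as the target side at 10.0.0.2, its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and a hardware traffic class is pinned to NVMe/TCP port 4420. A minimal sketch of the same steps, using the interface names, addresses and tc parameters exactly as they appear in this run (root required; assumes iproute2, ethtool and an ice-driven E810 NIC):

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                       # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # ADQ: hardware TC offload plus busy polling
    ip netns exec $NS ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    ip netns exec $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec $NS tc qdisc add dev cvl_0_0 ingress
    ip netns exec $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # (the harness additionally runs spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0)

The mqprio arguments split the channels into two traffic classes (TC0 gets 2 queues at offset 0, TC1 gets 2 queues at offset 2), and the flower filter steers the port-4420 flow into TC 1 entirely in hardware (skip_sw).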
00:16:51.860 19:47:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:51.860 19:47:32 -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 [2024-04-24 19:47:32.800838] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:16:51.860 [2024-04-24 19:47:32.800931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.860 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.860 [2024-04-24 19:47:32.865981] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.860 [2024-04-24 19:47:32.970978] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.860 [2024-04-24 19:47:32.971033] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.860 [2024-04-24 19:47:32.971056] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.860 [2024-04-24 19:47:32.971067] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.860 [2024-04-24 19:47:32.971082] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.860 [2024-04-24 19:47:32.971135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.860 [2024-04-24 19:47:32.971195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.860 [2024-04-24 19:47:32.971260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.860 [2024-04-24 19:47:32.971262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.860 19:47:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:51.860 19:47:33 -- common/autotest_common.sh@850 -- # return 0 00:16:51.860 19:47:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:51.860 19:47:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:51.860 19:47:33 -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 19:47:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.860 19:47:33 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:16:51.860 19:47:33 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:16:51.860 19:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.860 19:47:33 -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 19:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.860 19:47:33 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:16:51.860 19:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.860 19:47:33 -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 19:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.860 19:47:33 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:16:51.860 19:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.860 19:47:33 -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 [2024-04-24 19:47:33.157586] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.860 19:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.860 19:47:33 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
00:16:51.860 19:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.860 19:47:33 -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 Malloc1 00:16:51.860 19:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.860 19:47:33 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:51.860 19:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.860 19:47:33 -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 19:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.860 19:47:33 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:51.860 19:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.860 19:47:33 -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 19:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.860 19:47:33 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.860 19:47:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.860 19:47:33 -- common/autotest_common.sh@10 -- # set +x 00:16:51.860 [2024-04-24 19:47:33.210728] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.860 19:47:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.860 19:47:33 -- target/perf_adq.sh@94 -- # perfpid=1718794 00:16:51.860 19:47:33 -- target/perf_adq.sh@95 -- # sleep 2 00:16:51.860 19:47:33 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:51.860 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.761 19:47:35 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:16:53.761 19:47:35 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:16:53.761 19:47:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.761 19:47:35 -- target/perf_adq.sh@97 -- # wc -l 00:16:53.761 19:47:35 -- common/autotest_common.sh@10 -- # set +x 00:16:53.761 19:47:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.761 19:47:35 -- target/perf_adq.sh@97 -- # count=2 00:16:53.761 19:47:35 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:16:53.762 19:47:35 -- target/perf_adq.sh@103 -- # wait 1718794 00:17:03.735 Initializing NVMe Controllers 00:17:03.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:03.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:17:03.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:17:03.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:17:03.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:17:03.735 Initialization complete. Launching workers. 
00:17:03.735 ======================================================== 00:17:03.735 Latency(us) 00:17:03.735 Device Information : IOPS MiB/s Average min max 00:17:03.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3956.20 15.45 16181.22 2733.95 62257.64 00:17:03.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12985.80 50.73 4928.48 1278.16 7925.52 00:17:03.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4100.80 16.02 15656.20 2909.66 61168.46 00:17:03.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4127.50 16.12 15557.99 1922.86 61019.07 00:17:03.735 ======================================================== 00:17:03.735 Total : 25170.30 98.32 10188.00 1278.16 62257.64 00:17:03.735 00:17:03.735 19:47:43 -- target/perf_adq.sh@104 -- # nvmftestfini 00:17:03.735 19:47:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:03.735 19:47:43 -- nvmf/common.sh@117 -- # sync 00:17:03.735 19:47:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.735 19:47:43 -- nvmf/common.sh@120 -- # set +e 00:17:03.735 19:47:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.735 19:47:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.735 rmmod nvme_tcp 00:17:03.735 rmmod nvme_fabrics 00:17:03.735 rmmod nvme_keyring 00:17:03.735 19:47:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.735 19:47:43 -- nvmf/common.sh@124 -- # set -e 00:17:03.735 19:47:43 -- nvmf/common.sh@125 -- # return 0 00:17:03.735 19:47:43 -- nvmf/common.sh@478 -- # '[' -n 1718671 ']' 00:17:03.735 19:47:43 -- nvmf/common.sh@479 -- # killprocess 1718671 00:17:03.735 19:47:43 -- common/autotest_common.sh@936 -- # '[' -z 1718671 ']' 00:17:03.735 19:47:43 -- common/autotest_common.sh@940 -- # kill -0 1718671 00:17:03.735 19:47:43 -- common/autotest_common.sh@941 -- # uname 00:17:03.735 19:47:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:03.735 19:47:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1718671 00:17:03.735 19:47:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:03.735 19:47:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:03.735 19:47:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1718671' 00:17:03.735 killing process with pid 1718671 00:17:03.735 19:47:43 -- common/autotest_common.sh@955 -- # kill 1718671 00:17:03.735 19:47:43 -- common/autotest_common.sh@960 -- # wait 1718671 00:17:03.735 19:47:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:03.735 19:47:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:03.735 19:47:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:03.735 19:47:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.735 19:47:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:03.735 19:47:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.735 19:47:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.735 19:47:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.669 19:47:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:04.669 19:47:45 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:17:04.669 00:17:04.669 real 0m43.729s 00:17:04.669 user 2m35.980s 00:17:04.669 sys 0m10.763s 00:17:04.669 19:47:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:04.669 19:47:45 -- common/autotest_common.sh@10 -- # set +x 00:17:04.669 
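For reference, the target side of the perf_adq pass that just completed was configured entirely over the RPC socket, and the order matters: nvmf_tgt is launched with --wait-for-rpc precisely so that the posix sock options (placement IDs, zero-copy send) can be set before framework_start_init creates the poll groups. Replayed by hand with scripts/rpc.py from an SPDK checkout, the sequence in the trace is roughly:

    scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MB ramdisk bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The latency table above is consistent with the steering taking effect: one initiator lcore (core 5) sustained ~13k IOPS at ~4.9 ms average latency, while the other three cores sat near 4k IOPS at ~16 ms.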
************************************ 00:17:04.669 END TEST nvmf_perf_adq 00:17:04.669 ************************************ 00:17:04.669 19:47:45 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:17:04.669 19:47:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:04.669 19:47:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:04.669 19:47:45 -- common/autotest_common.sh@10 -- # set +x 00:17:04.669 ************************************ 00:17:04.669 START TEST nvmf_shutdown 00:17:04.669 ************************************ 00:17:04.669 19:47:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:17:04.669 * Looking for test storage... 00:17:04.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:04.669 19:47:46 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:04.669 19:47:46 -- nvmf/common.sh@7 -- # uname -s 00:17:04.669 19:47:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.669 19:47:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.669 19:47:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.669 19:47:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.669 19:47:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.669 19:47:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.669 19:47:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.669 19:47:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.669 19:47:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.669 19:47:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.669 19:47:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:04.669 19:47:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:04.669 19:47:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.669 19:47:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.669 19:47:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:04.669 19:47:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.669 19:47:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:04.669 19:47:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.669 19:47:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.669 19:47:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.669 19:47:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.669 19:47:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.669 19:47:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.669 19:47:46 -- paths/export.sh@5 -- # export PATH 00:17:04.669 19:47:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.669 19:47:46 -- nvmf/common.sh@47 -- # : 0 00:17:04.669 19:47:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:04.669 19:47:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:04.669 19:47:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.669 19:47:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.669 19:47:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.669 19:47:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:04.669 19:47:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:04.669 19:47:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:04.669 19:47:46 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:04.669 19:47:46 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:04.669 19:47:46 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:17:04.669 19:47:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:04.669 19:47:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:04.669 19:47:46 -- common/autotest_common.sh@10 -- # set +x 00:17:04.669 ************************************ 00:17:04.669 START TEST nvmf_shutdown_tc1 00:17:04.669 ************************************ 00:17:04.669 19:47:46 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:17:04.669 19:47:46 -- target/shutdown.sh@74 -- # starttarget 00:17:04.669 19:47:46 -- target/shutdown.sh@15 -- # nvmftestinit 00:17:04.669 19:47:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:04.670 19:47:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.670 19:47:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:04.670 19:47:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:04.670 19:47:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:04.670 
19:47:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.670 19:47:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.670 19:47:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.670 19:47:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:04.670 19:47:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:04.670 19:47:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:04.670 19:47:46 -- common/autotest_common.sh@10 -- # set +x 00:17:06.598 19:47:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:06.598 19:47:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:06.598 19:47:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:06.598 19:47:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:06.598 19:47:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:06.598 19:47:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:06.598 19:47:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:06.598 19:47:47 -- nvmf/common.sh@295 -- # net_devs=() 00:17:06.598 19:47:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:06.598 19:47:47 -- nvmf/common.sh@296 -- # e810=() 00:17:06.598 19:47:47 -- nvmf/common.sh@296 -- # local -ga e810 00:17:06.598 19:47:47 -- nvmf/common.sh@297 -- # x722=() 00:17:06.598 19:47:47 -- nvmf/common.sh@297 -- # local -ga x722 00:17:06.598 19:47:47 -- nvmf/common.sh@298 -- # mlx=() 00:17:06.598 19:47:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:06.598 19:47:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.598 19:47:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.598 19:47:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.598 19:47:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.598 19:47:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.598 19:47:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.598 19:47:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.598 19:47:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.598 19:47:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.598 19:47:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.598 19:47:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.598 19:47:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:06.598 19:47:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:06.598 19:47:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:06.598 19:47:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:06.598 19:47:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:06.598 19:47:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:06.598 19:47:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.598 19:47:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:06.598 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:06.598 19:47:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:06.598 19:47:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:06.598 19:47:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.598 19:47:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.598 19:47:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:06.598 19:47:48 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:06.598 19:47:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:06.598 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:06.598 19:47:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:06.598 19:47:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:06.598 19:47:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.598 19:47:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.598 19:47:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:06.598 19:47:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:06.598 19:47:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:06.598 19:47:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:06.598 19:47:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.598 19:47:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.598 19:47:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:06.598 19:47:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.598 19:47:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:06.598 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:06.598 19:47:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.598 19:47:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.598 19:47:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.598 19:47:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:06.599 19:47:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.599 19:47:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:06.599 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:06.599 19:47:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.599 19:47:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:06.599 19:47:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:06.599 19:47:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:06.599 19:47:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:06.599 19:47:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:06.599 19:47:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.599 19:47:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.599 19:47:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.599 19:47:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:06.599 19:47:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.599 19:47:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.599 19:47:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:06.599 19:47:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.599 19:47:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.599 19:47:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:06.599 19:47:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:06.599 19:47:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.599 19:47:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.599 19:47:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.599 19:47:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.599 19:47:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:06.599 19:47:48 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.858 19:47:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.858 19:47:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.858 19:47:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:06.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:17:06.858 00:17:06.858 --- 10.0.0.2 ping statistics --- 00:17:06.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.858 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:17:06.858 19:47:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:06.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:17:06.858 00:17:06.858 --- 10.0.0.1 ping statistics --- 00:17:06.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.858 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:17:06.858 19:47:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.858 19:47:48 -- nvmf/common.sh@411 -- # return 0 00:17:06.858 19:47:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:06.858 19:47:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.858 19:47:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:06.858 19:47:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:06.858 19:47:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.858 19:47:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:06.858 19:47:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:06.858 19:47:48 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:17:06.859 19:47:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:06.859 19:47:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:06.859 19:47:48 -- common/autotest_common.sh@10 -- # set +x 00:17:06.859 19:47:48 -- nvmf/common.sh@470 -- # nvmfpid=1721970 00:17:06.859 19:47:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:06.859 19:47:48 -- nvmf/common.sh@471 -- # waitforlisten 1721970 00:17:06.859 19:47:48 -- common/autotest_common.sh@817 -- # '[' -z 1721970 ']' 00:17:06.859 19:47:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.859 19:47:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:06.859 19:47:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.859 19:47:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:06.859 19:47:48 -- common/autotest_common.sh@10 -- # set +x 00:17:06.859 [2024-04-24 19:47:48.220753] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:17:06.859 [2024-04-24 19:47:48.220835] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.859 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.859 [2024-04-24 19:47:48.284660] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:07.117 [2024-04-24 19:47:48.396554] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.117 [2024-04-24 19:47:48.396612] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.117 [2024-04-24 19:47:48.396647] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:07.117 [2024-04-24 19:47:48.396659] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:07.117 [2024-04-24 19:47:48.396669] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.117 [2024-04-24 19:47:48.396798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.117 [2024-04-24 19:47:48.397105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.117 [2024-04-24 19:47:48.397166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:07.117 [2024-04-24 19:47:48.397170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.117 19:47:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:07.117 19:47:48 -- common/autotest_common.sh@850 -- # return 0 00:17:07.117 19:47:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:07.117 19:47:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:07.117 19:47:48 -- common/autotest_common.sh@10 -- # set +x 00:17:07.117 19:47:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.117 19:47:48 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:07.117 19:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.117 19:47:48 -- common/autotest_common.sh@10 -- # set +x 00:17:07.117 [2024-04-24 19:47:48.547178] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.117 19:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.117 19:47:48 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:07.117 19:47:48 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:07.117 19:47:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:07.117 19:47:48 -- common/autotest_common.sh@10 -- # set +x 00:17:07.117 19:47:48 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:07.117 19:47:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:07.117 19:47:48 -- target/shutdown.sh@28 -- # cat 00:17:07.117 19:47:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:07.117 19:47:48 -- target/shutdown.sh@28 -- # cat 00:17:07.117 19:47:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:07.117 19:47:48 -- target/shutdown.sh@28 -- # cat 00:17:07.117 19:47:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:07.117 19:47:48 -- target/shutdown.sh@28 -- # cat 00:17:07.117 19:47:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:07.117 19:47:48 -- target/shutdown.sh@28 
-- # cat 00:17:07.117 19:47:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:07.117 19:47:48 -- target/shutdown.sh@28 -- # cat 00:17:07.117 19:47:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:07.117 19:47:48 -- target/shutdown.sh@28 -- # cat 00:17:07.117 19:47:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:07.117 19:47:48 -- target/shutdown.sh@28 -- # cat 00:17:07.117 19:47:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:07.117 19:47:48 -- target/shutdown.sh@28 -- # cat 00:17:07.117 19:47:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:07.117 19:47:48 -- target/shutdown.sh@28 -- # cat 00:17:07.117 19:47:48 -- target/shutdown.sh@35 -- # rpc_cmd 00:17:07.117 19:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.117 19:47:48 -- common/autotest_common.sh@10 -- # set +x 00:17:07.117 Malloc1 00:17:07.117 [2024-04-24 19:47:48.623115] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.375 Malloc2 00:17:07.375 Malloc3 00:17:07.375 Malloc4 00:17:07.375 Malloc5 00:17:07.375 Malloc6 00:17:07.375 Malloc7 00:17:07.634 Malloc8 00:17:07.634 Malloc9 00:17:07.634 Malloc10 00:17:07.634 19:47:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.634 19:47:49 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:07.634 19:47:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:07.634 19:47:49 -- common/autotest_common.sh@10 -- # set +x 00:17:07.634 19:47:49 -- target/shutdown.sh@78 -- # perfpid=1722151 00:17:07.634 19:47:49 -- target/shutdown.sh@79 -- # waitforlisten 1722151 /var/tmp/bdevperf.sock 00:17:07.634 19:47:49 -- common/autotest_common.sh@817 -- # '[' -z 1722151 ']' 00:17:07.634 19:47:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:07.634 19:47:49 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:17:07.634 19:47:49 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:07.634 19:47:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:07.634 19:47:49 -- nvmf/common.sh@521 -- # config=() 00:17:07.634 19:47:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:07.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
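The create_subsystems loop above writes one block per subsystem into rpcs.txt and then issues the whole file through a single rpc_cmd invocation; the file contents themselves are not echoed into this trace. Expanded as individual calls, the batch plausibly amounts to the sketch below (the per-subsystem serial-number pattern is an assumption; the 64/512 sizes come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE set at the top of shutdown.sh):

    for i in $(seq 1 10); do
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK000000000000$i"   # serial assumed
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

The Malloc1..Malloc10 lines and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice in the trace match this expansion.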
00:17:07.634 19:47:49 -- nvmf/common.sh@521 -- # local subsystem config 00:17:07.634 19:47:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:07.634 19:47:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:07.634 19:47:49 -- common/autotest_common.sh@10 -- # set +x 00:17:07.634 19:47:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:07.634 { 00:17:07.634 "params": { 00:17:07.634 "name": "Nvme$subsystem", 00:17:07.634 "trtype": "$TEST_TRANSPORT", 00:17:07.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.634 "adrfam": "ipv4", 00:17:07.634 "trsvcid": "$NVMF_PORT", 00:17:07.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.634 "hdgst": ${hdgst:-false}, 00:17:07.634 "ddgst": ${ddgst:-false} 00:17:07.634 }, 00:17:07.634 "method": "bdev_nvme_attach_controller" 00:17:07.634 } 00:17:07.634 EOF 00:17:07.634 )") 00:17:07.634 19:47:49 -- nvmf/common.sh@543 -- # cat 00:17:07.634 19:47:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:07.634 19:47:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:07.634 { 00:17:07.634 "params": { 00:17:07.634 "name": "Nvme$subsystem", 00:17:07.634 "trtype": "$TEST_TRANSPORT", 00:17:07.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "$NVMF_PORT", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.635 "hdgst": ${hdgst:-false}, 00:17:07.635 "ddgst": ${ddgst:-false} 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 } 00:17:07.635 EOF 00:17:07.635 )") 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # cat 00:17:07.635 19:47:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:07.635 { 00:17:07.635 "params": { 00:17:07.635 "name": "Nvme$subsystem", 00:17:07.635 "trtype": "$TEST_TRANSPORT", 00:17:07.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "$NVMF_PORT", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.635 "hdgst": ${hdgst:-false}, 00:17:07.635 "ddgst": ${ddgst:-false} 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 } 00:17:07.635 EOF 00:17:07.635 )") 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # cat 00:17:07.635 19:47:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:07.635 { 00:17:07.635 "params": { 00:17:07.635 "name": "Nvme$subsystem", 00:17:07.635 "trtype": "$TEST_TRANSPORT", 00:17:07.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "$NVMF_PORT", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.635 "hdgst": ${hdgst:-false}, 00:17:07.635 "ddgst": ${ddgst:-false} 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 } 00:17:07.635 EOF 00:17:07.635 )") 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # cat 00:17:07.635 19:47:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:07.635 { 00:17:07.635 "params": { 00:17:07.635 "name": "Nvme$subsystem", 00:17:07.635 "trtype": "$TEST_TRANSPORT", 00:17:07.635 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "$NVMF_PORT", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.635 "hdgst": ${hdgst:-false}, 00:17:07.635 "ddgst": ${ddgst:-false} 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 } 00:17:07.635 EOF 00:17:07.635 )") 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # cat 00:17:07.635 19:47:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:07.635 { 00:17:07.635 "params": { 00:17:07.635 "name": "Nvme$subsystem", 00:17:07.635 "trtype": "$TEST_TRANSPORT", 00:17:07.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "$NVMF_PORT", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.635 "hdgst": ${hdgst:-false}, 00:17:07.635 "ddgst": ${ddgst:-false} 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 } 00:17:07.635 EOF 00:17:07.635 )") 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # cat 00:17:07.635 19:47:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:07.635 { 00:17:07.635 "params": { 00:17:07.635 "name": "Nvme$subsystem", 00:17:07.635 "trtype": "$TEST_TRANSPORT", 00:17:07.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "$NVMF_PORT", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.635 "hdgst": ${hdgst:-false}, 00:17:07.635 "ddgst": ${ddgst:-false} 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 } 00:17:07.635 EOF 00:17:07.635 )") 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # cat 00:17:07.635 19:47:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:07.635 { 00:17:07.635 "params": { 00:17:07.635 "name": "Nvme$subsystem", 00:17:07.635 "trtype": "$TEST_TRANSPORT", 00:17:07.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "$NVMF_PORT", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.635 "hdgst": ${hdgst:-false}, 00:17:07.635 "ddgst": ${ddgst:-false} 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 } 00:17:07.635 EOF 00:17:07.635 )") 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # cat 00:17:07.635 19:47:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:07.635 { 00:17:07.635 "params": { 00:17:07.635 "name": "Nvme$subsystem", 00:17:07.635 "trtype": "$TEST_TRANSPORT", 00:17:07.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "$NVMF_PORT", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.635 "hdgst": ${hdgst:-false}, 00:17:07.635 "ddgst": ${ddgst:-false} 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 } 00:17:07.635 EOF 00:17:07.635 )") 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # cat 00:17:07.635 19:47:49 -- nvmf/common.sh@523 -- # for 
subsystem in "${@:-1}" 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:07.635 { 00:17:07.635 "params": { 00:17:07.635 "name": "Nvme$subsystem", 00:17:07.635 "trtype": "$TEST_TRANSPORT", 00:17:07.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "$NVMF_PORT", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.635 "hdgst": ${hdgst:-false}, 00:17:07.635 "ddgst": ${ddgst:-false} 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 } 00:17:07.635 EOF 00:17:07.635 )") 00:17:07.635 19:47:49 -- nvmf/common.sh@543 -- # cat 00:17:07.635 19:47:49 -- nvmf/common.sh@545 -- # jq . 00:17:07.635 19:47:49 -- nvmf/common.sh@546 -- # IFS=, 00:17:07.635 19:47:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:07.635 "params": { 00:17:07.635 "name": "Nvme1", 00:17:07.635 "trtype": "tcp", 00:17:07.635 "traddr": "10.0.0.2", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "4420", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.635 "hdgst": false, 00:17:07.635 "ddgst": false 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 },{ 00:17:07.635 "params": { 00:17:07.635 "name": "Nvme2", 00:17:07.635 "trtype": "tcp", 00:17:07.635 "traddr": "10.0.0.2", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "4420", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:07.635 "hdgst": false, 00:17:07.635 "ddgst": false 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 },{ 00:17:07.635 "params": { 00:17:07.635 "name": "Nvme3", 00:17:07.635 "trtype": "tcp", 00:17:07.635 "traddr": "10.0.0.2", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "4420", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:07.635 "hdgst": false, 00:17:07.635 "ddgst": false 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 },{ 00:17:07.635 "params": { 00:17:07.635 "name": "Nvme4", 00:17:07.635 "trtype": "tcp", 00:17:07.635 "traddr": "10.0.0.2", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "4420", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:07.635 "hdgst": false, 00:17:07.635 "ddgst": false 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 },{ 00:17:07.635 "params": { 00:17:07.635 "name": "Nvme5", 00:17:07.635 "trtype": "tcp", 00:17:07.635 "traddr": "10.0.0.2", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "4420", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:07.635 "hdgst": false, 00:17:07.635 "ddgst": false 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 },{ 00:17:07.635 "params": { 00:17:07.635 "name": "Nvme6", 00:17:07.635 "trtype": "tcp", 00:17:07.635 "traddr": "10.0.0.2", 00:17:07.635 "adrfam": "ipv4", 00:17:07.635 "trsvcid": "4420", 00:17:07.635 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:07.635 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:07.635 "hdgst": false, 00:17:07.635 "ddgst": false 00:17:07.635 }, 00:17:07.635 "method": "bdev_nvme_attach_controller" 00:17:07.635 },{ 00:17:07.635 "params": { 00:17:07.636 "name": "Nvme7", 00:17:07.636 "trtype": 
"tcp", 00:17:07.636 "traddr": "10.0.0.2", 00:17:07.636 "adrfam": "ipv4", 00:17:07.636 "trsvcid": "4420", 00:17:07.636 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:07.636 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:07.636 "hdgst": false, 00:17:07.636 "ddgst": false 00:17:07.636 }, 00:17:07.636 "method": "bdev_nvme_attach_controller" 00:17:07.636 },{ 00:17:07.636 "params": { 00:17:07.636 "name": "Nvme8", 00:17:07.636 "trtype": "tcp", 00:17:07.636 "traddr": "10.0.0.2", 00:17:07.636 "adrfam": "ipv4", 00:17:07.636 "trsvcid": "4420", 00:17:07.636 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:07.636 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:07.636 "hdgst": false, 00:17:07.636 "ddgst": false 00:17:07.636 }, 00:17:07.636 "method": "bdev_nvme_attach_controller" 00:17:07.636 },{ 00:17:07.636 "params": { 00:17:07.636 "name": "Nvme9", 00:17:07.636 "trtype": "tcp", 00:17:07.636 "traddr": "10.0.0.2", 00:17:07.636 "adrfam": "ipv4", 00:17:07.636 "trsvcid": "4420", 00:17:07.636 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:07.636 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:07.636 "hdgst": false, 00:17:07.636 "ddgst": false 00:17:07.636 }, 00:17:07.636 "method": "bdev_nvme_attach_controller" 00:17:07.636 },{ 00:17:07.636 "params": { 00:17:07.636 "name": "Nvme10", 00:17:07.636 "trtype": "tcp", 00:17:07.636 "traddr": "10.0.0.2", 00:17:07.636 "adrfam": "ipv4", 00:17:07.636 "trsvcid": "4420", 00:17:07.636 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:07.636 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:07.636 "hdgst": false, 00:17:07.636 "ddgst": false 00:17:07.636 }, 00:17:07.636 "method": "bdev_nvme_attach_controller" 00:17:07.636 }' 00:17:07.636 [2024-04-24 19:47:49.119213] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:17:07.636 [2024-04-24 19:47:49.119285] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:07.895 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.895 [2024-04-24 19:47:49.183566] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.895 [2024-04-24 19:47:49.293384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.802 19:47:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:09.802 19:47:50 -- common/autotest_common.sh@850 -- # return 0 00:17:09.802 19:47:50 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:09.802 19:47:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.802 19:47:50 -- common/autotest_common.sh@10 -- # set +x 00:17:09.802 19:47:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.802 19:47:50 -- target/shutdown.sh@83 -- # kill -9 1722151 00:17:09.802 19:47:50 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:17:09.802 19:47:50 -- target/shutdown.sh@87 -- # sleep 1 00:17:10.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1722151 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:17:10.370 19:47:51 -- target/shutdown.sh@88 -- # kill -0 1721970 00:17:10.370 19:47:51 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:10.370 19:47:51 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 
00:17:10.370 19:47:51 -- nvmf/common.sh@521 -- # config=() 00:17:10.370 19:47:51 -- nvmf/common.sh@521 -- # local subsystem config 00:17:10.370 19:47:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:10.370 19:47:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:10.370 { 00:17:10.370 "params": { 00:17:10.370 "name": "Nvme$subsystem", 00:17:10.370 "trtype": "$TEST_TRANSPORT", 00:17:10.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.370 "adrfam": "ipv4", 00:17:10.370 "trsvcid": "$NVMF_PORT", 00:17:10.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.370 "hdgst": ${hdgst:-false}, 00:17:10.370 "ddgst": ${ddgst:-false} 00:17:10.370 }, 00:17:10.370 "method": "bdev_nvme_attach_controller" 00:17:10.370 } 00:17:10.370 EOF 00:17:10.370 )") 00:17:10.370 19:47:51 -- nvmf/common.sh@543 -- # cat 00:17:10.370 19:47:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:10.370 19:47:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:10.370 { 00:17:10.370 "params": { 00:17:10.370 "name": "Nvme$subsystem", 00:17:10.370 "trtype": "$TEST_TRANSPORT", 00:17:10.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.370 "adrfam": "ipv4", 00:17:10.370 "trsvcid": "$NVMF_PORT", 00:17:10.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.370 "hdgst": ${hdgst:-false}, 00:17:10.370 "ddgst": ${ddgst:-false} 00:17:10.370 }, 00:17:10.370 "method": "bdev_nvme_attach_controller" 00:17:10.370 } 00:17:10.370 EOF 00:17:10.370 )") 00:17:10.370 19:47:51 -- nvmf/common.sh@543 -- # cat 00:17:10.370 19:47:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:10.370 19:47:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:10.370 { 00:17:10.370 "params": { 00:17:10.370 "name": "Nvme$subsystem", 00:17:10.370 "trtype": "$TEST_TRANSPORT", 00:17:10.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.370 "adrfam": "ipv4", 00:17:10.370 "trsvcid": "$NVMF_PORT", 00:17:10.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.370 "hdgst": ${hdgst:-false}, 00:17:10.370 "ddgst": ${ddgst:-false} 00:17:10.370 }, 00:17:10.370 "method": "bdev_nvme_attach_controller" 00:17:10.370 } 00:17:10.370 EOF 00:17:10.370 )") 00:17:10.370 19:47:51 -- nvmf/common.sh@543 -- # cat 00:17:10.630 19:47:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:10.630 19:47:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:10.630 { 00:17:10.630 "params": { 00:17:10.630 "name": "Nvme$subsystem", 00:17:10.630 "trtype": "$TEST_TRANSPORT", 00:17:10.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.630 "adrfam": "ipv4", 00:17:10.630 "trsvcid": "$NVMF_PORT", 00:17:10.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.630 "hdgst": ${hdgst:-false}, 00:17:10.630 "ddgst": ${ddgst:-false} 00:17:10.630 }, 00:17:10.630 "method": "bdev_nvme_attach_controller" 00:17:10.630 } 00:17:10.630 EOF 00:17:10.630 )") 00:17:10.630 19:47:51 -- nvmf/common.sh@543 -- # cat 00:17:10.630 19:47:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:10.630 19:47:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:10.630 { 00:17:10.630 "params": { 00:17:10.630 "name": "Nvme$subsystem", 00:17:10.630 "trtype": "$TEST_TRANSPORT", 00:17:10.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.630 "adrfam": "ipv4", 00:17:10.630 "trsvcid": "$NVMF_PORT", 
00:17:10.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.630 "hdgst": ${hdgst:-false}, 00:17:10.630 "ddgst": ${ddgst:-false} 00:17:10.630 }, 00:17:10.630 "method": "bdev_nvme_attach_controller" 00:17:10.630 } 00:17:10.630 EOF 00:17:10.630 )") 00:17:10.630 19:47:51 -- nvmf/common.sh@543 -- # cat 00:17:10.630 19:47:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:10.630 19:47:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:10.630 { 00:17:10.630 "params": { 00:17:10.630 "name": "Nvme$subsystem", 00:17:10.630 "trtype": "$TEST_TRANSPORT", 00:17:10.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.630 "adrfam": "ipv4", 00:17:10.630 "trsvcid": "$NVMF_PORT", 00:17:10.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.630 "hdgst": ${hdgst:-false}, 00:17:10.630 "ddgst": ${ddgst:-false} 00:17:10.630 }, 00:17:10.630 "method": "bdev_nvme_attach_controller" 00:17:10.630 } 00:17:10.630 EOF 00:17:10.630 )") 00:17:10.630 19:47:51 -- nvmf/common.sh@543 -- # cat 00:17:10.630 19:47:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:10.630 19:47:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:10.630 { 00:17:10.630 "params": { 00:17:10.630 "name": "Nvme$subsystem", 00:17:10.630 "trtype": "$TEST_TRANSPORT", 00:17:10.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.630 "adrfam": "ipv4", 00:17:10.630 "trsvcid": "$NVMF_PORT", 00:17:10.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.630 "hdgst": ${hdgst:-false}, 00:17:10.630 "ddgst": ${ddgst:-false} 00:17:10.630 }, 00:17:10.630 "method": "bdev_nvme_attach_controller" 00:17:10.630 } 00:17:10.630 EOF 00:17:10.630 )") 00:17:10.630 19:47:51 -- nvmf/common.sh@543 -- # cat 00:17:10.630 19:47:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:10.630 19:47:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:10.630 { 00:17:10.630 "params": { 00:17:10.630 "name": "Nvme$subsystem", 00:17:10.630 "trtype": "$TEST_TRANSPORT", 00:17:10.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.630 "adrfam": "ipv4", 00:17:10.630 "trsvcid": "$NVMF_PORT", 00:17:10.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.630 "hdgst": ${hdgst:-false}, 00:17:10.630 "ddgst": ${ddgst:-false} 00:17:10.630 }, 00:17:10.630 "method": "bdev_nvme_attach_controller" 00:17:10.630 } 00:17:10.630 EOF 00:17:10.630 )") 00:17:10.630 19:47:51 -- nvmf/common.sh@543 -- # cat 00:17:10.630 19:47:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:10.630 19:47:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:10.630 { 00:17:10.630 "params": { 00:17:10.630 "name": "Nvme$subsystem", 00:17:10.630 "trtype": "$TEST_TRANSPORT", 00:17:10.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.630 "adrfam": "ipv4", 00:17:10.630 "trsvcid": "$NVMF_PORT", 00:17:10.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.631 "hdgst": ${hdgst:-false}, 00:17:10.631 "ddgst": ${ddgst:-false} 00:17:10.631 }, 00:17:10.631 "method": "bdev_nvme_attach_controller" 00:17:10.631 } 00:17:10.631 EOF 00:17:10.631 )") 00:17:10.631 19:47:51 -- nvmf/common.sh@543 -- # cat 00:17:10.631 19:47:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:10.631 19:47:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:17:10.631 { 00:17:10.631 "params": { 00:17:10.631 "name": "Nvme$subsystem", 00:17:10.631 "trtype": "$TEST_TRANSPORT", 00:17:10.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.631 "adrfam": "ipv4", 00:17:10.631 "trsvcid": "$NVMF_PORT", 00:17:10.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.631 "hdgst": ${hdgst:-false}, 00:17:10.631 "ddgst": ${ddgst:-false} 00:17:10.631 }, 00:17:10.631 "method": "bdev_nvme_attach_controller" 00:17:10.631 } 00:17:10.631 EOF 00:17:10.631 )") 00:17:10.631 19:47:51 -- nvmf/common.sh@543 -- # cat 00:17:10.631 19:47:51 -- nvmf/common.sh@545 -- # jq . 00:17:10.631 19:47:51 -- nvmf/common.sh@546 -- # IFS=, 00:17:10.631 19:47:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:10.631 "params": { 00:17:10.631 "name": "Nvme1", 00:17:10.631 "trtype": "tcp", 00:17:10.631 "traddr": "10.0.0.2", 00:17:10.631 "adrfam": "ipv4", 00:17:10.631 "trsvcid": "4420", 00:17:10.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.631 "hdgst": false, 00:17:10.631 "ddgst": false 00:17:10.631 }, 00:17:10.631 "method": "bdev_nvme_attach_controller" 00:17:10.631 },{ 00:17:10.631 "params": { 00:17:10.631 "name": "Nvme2", 00:17:10.631 "trtype": "tcp", 00:17:10.631 "traddr": "10.0.0.2", 00:17:10.631 "adrfam": "ipv4", 00:17:10.631 "trsvcid": "4420", 00:17:10.631 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:10.631 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:10.631 "hdgst": false, 00:17:10.631 "ddgst": false 00:17:10.631 }, 00:17:10.631 "method": "bdev_nvme_attach_controller" 00:17:10.631 },{ 00:17:10.631 "params": { 00:17:10.631 "name": "Nvme3", 00:17:10.631 "trtype": "tcp", 00:17:10.631 "traddr": "10.0.0.2", 00:17:10.631 "adrfam": "ipv4", 00:17:10.631 "trsvcid": "4420", 00:17:10.631 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:10.631 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:10.631 "hdgst": false, 00:17:10.631 "ddgst": false 00:17:10.631 }, 00:17:10.631 "method": "bdev_nvme_attach_controller" 00:17:10.631 },{ 00:17:10.631 "params": { 00:17:10.631 "name": "Nvme4", 00:17:10.631 "trtype": "tcp", 00:17:10.631 "traddr": "10.0.0.2", 00:17:10.631 "adrfam": "ipv4", 00:17:10.631 "trsvcid": "4420", 00:17:10.631 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:10.631 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:10.631 "hdgst": false, 00:17:10.631 "ddgst": false 00:17:10.631 }, 00:17:10.631 "method": "bdev_nvme_attach_controller" 00:17:10.631 },{ 00:17:10.631 "params": { 00:17:10.631 "name": "Nvme5", 00:17:10.631 "trtype": "tcp", 00:17:10.631 "traddr": "10.0.0.2", 00:17:10.631 "adrfam": "ipv4", 00:17:10.631 "trsvcid": "4420", 00:17:10.631 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:10.631 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:10.631 "hdgst": false, 00:17:10.631 "ddgst": false 00:17:10.631 }, 00:17:10.631 "method": "bdev_nvme_attach_controller" 00:17:10.631 },{ 00:17:10.631 "params": { 00:17:10.631 "name": "Nvme6", 00:17:10.631 "trtype": "tcp", 00:17:10.631 "traddr": "10.0.0.2", 00:17:10.631 "adrfam": "ipv4", 00:17:10.631 "trsvcid": "4420", 00:17:10.631 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:10.631 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:10.631 "hdgst": false, 00:17:10.631 "ddgst": false 00:17:10.631 }, 00:17:10.631 "method": "bdev_nvme_attach_controller" 00:17:10.631 },{ 00:17:10.631 "params": { 00:17:10.631 "name": "Nvme7", 00:17:10.631 "trtype": "tcp", 00:17:10.631 "traddr": "10.0.0.2", 00:17:10.631 "adrfam": "ipv4", 00:17:10.631 "trsvcid": 
"4420", 00:17:10.631 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:10.631 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:10.631 "hdgst": false, 00:17:10.631 "ddgst": false 00:17:10.631 }, 00:17:10.631 "method": "bdev_nvme_attach_controller" 00:17:10.631 },{ 00:17:10.631 "params": { 00:17:10.631 "name": "Nvme8", 00:17:10.631 "trtype": "tcp", 00:17:10.631 "traddr": "10.0.0.2", 00:17:10.631 "adrfam": "ipv4", 00:17:10.631 "trsvcid": "4420", 00:17:10.631 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:10.631 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:10.631 "hdgst": false, 00:17:10.631 "ddgst": false 00:17:10.631 }, 00:17:10.631 "method": "bdev_nvme_attach_controller" 00:17:10.631 },{ 00:17:10.631 "params": { 00:17:10.631 "name": "Nvme9", 00:17:10.631 "trtype": "tcp", 00:17:10.631 "traddr": "10.0.0.2", 00:17:10.631 "adrfam": "ipv4", 00:17:10.631 "trsvcid": "4420", 00:17:10.631 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:10.631 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:10.631 "hdgst": false, 00:17:10.631 "ddgst": false 00:17:10.631 }, 00:17:10.631 "method": "bdev_nvme_attach_controller" 00:17:10.631 },{ 00:17:10.631 "params": { 00:17:10.631 "name": "Nvme10", 00:17:10.631 "trtype": "tcp", 00:17:10.631 "traddr": "10.0.0.2", 00:17:10.631 "adrfam": "ipv4", 00:17:10.631 "trsvcid": "4420", 00:17:10.631 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:10.631 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:10.631 "hdgst": false, 00:17:10.631 "ddgst": false 00:17:10.631 }, 00:17:10.631 "method": "bdev_nvme_attach_controller" 00:17:10.631 }' 00:17:10.631 [2024-04-24 19:47:51.917305] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:17:10.631 [2024-04-24 19:47:51.917399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722452 ] 00:17:10.631 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.631 [2024-04-24 19:47:51.983979] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.631 [2024-04-24 19:47:52.095422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.018 Running I/O for 1 seconds... 
00:17:13.395
00:17:13.395 Latency(us)
00:17:13.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:13.395 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:13.395 Verification LBA range: start 0x0 length 0x400
00:17:13.395 Nvme1n1 : 1.11 173.32 10.83 0.00 0.00 359883.66 27573.67 304475.40
00:17:13.395 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:13.395 Verification LBA range: start 0x0 length 0x400
00:17:13.395 Nvme2n1 : 1.08 236.02 14.75 0.00 0.00 263856.55 20874.43 248551.35
00:17:13.395 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:13.395 Verification LBA range: start 0x0 length 0x400
00:17:13.395 Nvme3n1 : 1.15 277.29 17.33 0.00 0.00 221101.62 17961.72 246997.90
00:17:13.395 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:13.395 Verification LBA range: start 0x0 length 0x400
00:17:13.395 Nvme4n1 : 1.11 230.88 14.43 0.00 0.00 255475.67 22330.79 259425.47
00:17:13.395 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:13.395 Verification LBA range: start 0x0 length 0x400
00:17:13.395 Nvme5n1 : 1.17 219.64 13.73 0.00 0.00 270279.49 23204.60 290494.39
00:17:13.395 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:13.395 Verification LBA range: start 0x0 length 0x400
00:17:13.395 Nvme6n1 : 1.15 223.50 13.97 0.00 0.00 260755.15 22524.97 228356.55
00:17:13.395 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:13.395 Verification LBA range: start 0x0 length 0x400
00:17:13.395 Nvme7n1 : 1.18 271.63 16.98 0.00 0.00 211446.75 20486.07 273406.48
00:17:13.395 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:13.395 Verification LBA range: start 0x0 length 0x400
00:17:13.395 Nvme8n1 : 1.19 269.52 16.84 0.00 0.00 209327.90 19612.25 274959.93
00:17:13.395 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:13.395 Verification LBA range: start 0x0 length 0x400
00:17:13.395 Nvme9n1 : 1.17 221.80 13.86 0.00 0.00 249150.88 18155.90 259425.47
00:17:13.395 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:13.395 Verification LBA range: start 0x0 length 0x400
00:17:13.395 Nvme10n1 : 1.17 218.90 13.68 0.00 0.00 249007.22 23010.42 284280.60
00:17:13.395 ===================================================================================================================
00:17:13.395 Total : 2342.49 146.41 0.00 0.00 249597.72 17961.72 304475.40
00:17:13.654 19:47:54 -- target/shutdown.sh@94 -- # stoptarget
00:17:13.654 19:47:54 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:17:13.654 19:47:54 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:13.654 19:47:54 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:13.654 19:47:54 -- target/shutdown.sh@45 -- # nvmftestfini
00:17:13.654 19:47:54 -- nvmf/common.sh@477 -- # nvmfcleanup
00:17:13.654 19:47:54 -- nvmf/common.sh@117 -- # sync
00:17:13.654 19:47:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:13.654 19:47:54 -- nvmf/common.sh@120 -- # set +e
00:17:13.654 19:47:54 -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:13.654 19:47:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod
nvme_keyring 00:17:13.654 19:47:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.654 19:47:55 -- nvmf/common.sh@124 -- # set -e 00:17:13.654 19:47:55 -- nvmf/common.sh@125 -- # return 0 00:17:13.654 19:47:55 -- nvmf/common.sh@478 -- # '[' -n 1721970 ']' 00:17:13.654 19:47:55 -- nvmf/common.sh@479 -- # killprocess 1721970 00:17:13.654 19:47:55 -- common/autotest_common.sh@936 -- # '[' -z 1721970 ']' 00:17:13.654 19:47:55 -- common/autotest_common.sh@940 -- # kill -0 1721970 00:17:13.654 19:47:55 -- common/autotest_common.sh@941 -- # uname 00:17:13.654 19:47:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:13.654 19:47:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1721970 00:17:13.654 19:47:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:13.654 19:47:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:13.654 19:47:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1721970' 00:17:13.654 killing process with pid 1721970 00:17:13.654 19:47:55 -- common/autotest_common.sh@955 -- # kill 1721970 00:17:13.654 19:47:55 -- common/autotest_common.sh@960 -- # wait 1721970 00:17:14.222 19:47:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:14.222 19:47:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:14.222 19:47:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:14.222 19:47:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:14.222 19:47:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:14.222 19:47:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.222 19:47:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.222 19:47:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.133 19:47:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:16.133 00:17:16.133 real 0m11.460s 00:17:16.133 user 0m32.597s 00:17:16.133 sys 0m3.152s 00:17:16.133 19:47:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:16.133 19:47:57 -- common/autotest_common.sh@10 -- # set +x 00:17:16.133 ************************************ 00:17:16.133 END TEST nvmf_shutdown_tc1 00:17:16.133 ************************************ 00:17:16.133 19:47:57 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:17:16.133 19:47:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:16.133 19:47:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:16.133 19:47:57 -- common/autotest_common.sh@10 -- # set +x 00:17:16.392 ************************************ 00:17:16.392 START TEST nvmf_shutdown_tc2 00:17:16.392 ************************************ 00:17:16.392 19:47:57 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:17:16.392 19:47:57 -- target/shutdown.sh@99 -- # starttarget 00:17:16.392 19:47:57 -- target/shutdown.sh@15 -- # nvmftestinit 00:17:16.392 19:47:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:16.392 19:47:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.392 19:47:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:16.392 19:47:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:16.392 19:47:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:16.392 19:47:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.392 19:47:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.392 19:47:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.392 
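The killprocess call traced above (and again later in this log for the bdevperf pid) guards the kill behind liveness and identity checks. A sketch following the autotest_common.sh line numbers visible in the trace (error handling abbreviated; the sudo branch is not taken in this run, so its body here is an assumption, not the verbatim source):

killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1                          # @936: refuse an empty pid
  kill -0 "$pid"                                     # @940: is the process still alive?
  if [ "$(uname)" = Linux ]; then                    # @941
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")  # @942: e.g. reactor_1 above
    if [ "$process_name" = sudo ]; then              # @946
      pid=$(pgrep -P "$pid")                         # assumed: descend to the wrapped child
    fi
  fi
  echo "killing process with pid $pid"               # @954
  kill "$pid"                                        # @955
  wait "$pid" || true                                # @960: reap it; a non-zero exit is expected here
}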
19:47:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:16.392 19:47:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:16.392 19:47:57 -- common/autotest_common.sh@10 -- # set +x 00:17:16.392 19:47:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:16.392 19:47:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.392 19:47:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.392 19:47:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.392 19:47:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.392 19:47:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.392 19:47:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.392 19:47:57 -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.392 19:47:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.392 19:47:57 -- nvmf/common.sh@296 -- # e810=() 00:17:16.392 19:47:57 -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.392 19:47:57 -- nvmf/common.sh@297 -- # x722=() 00:17:16.392 19:47:57 -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.392 19:47:57 -- nvmf/common.sh@298 -- # mlx=() 00:17:16.392 19:47:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.392 19:47:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.392 19:47:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.392 19:47:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.392 19:47:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.392 19:47:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.392 19:47:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.392 19:47:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.392 19:47:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.392 19:47:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.392 19:47:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.392 19:47:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.392 19:47:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:16.392 19:47:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.392 19:47:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.392 19:47:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.392 19:47:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:16.392 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:16.392 19:47:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.392 19:47:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:16.392 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:16.392 19:47:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.392 
19:47:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.392 19:47:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.392 19:47:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.392 19:47:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:16.392 19:47:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.392 19:47:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:16.392 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:16.392 19:47:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.392 19:47:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.392 19:47:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.392 19:47:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:16.392 19:47:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.392 19:47:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:16.392 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:16.392 19:47:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.392 19:47:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:16.392 19:47:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:16.392 19:47:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:16.392 19:47:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:16.392 19:47:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.392 19:47:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.392 19:47:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.393 19:47:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.393 19:47:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.393 19:47:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.393 19:47:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.393 19:47:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.393 19:47:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.393 19:47:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.393 19:47:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.393 19:47:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.393 19:47:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.393 19:47:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.393 19:47:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.393 19:47:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.393 19:47:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.393 19:47:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.393 19:47:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:17:16.393 19:47:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:17:16.393 00:17:16.393 --- 10.0.0.2 ping statistics --- 00:17:16.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.393 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:17:16.393 19:47:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:16.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:17:16.393 00:17:16.393 --- 10.0.0.1 ping statistics --- 00:17:16.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.393 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:17:16.393 19:47:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.393 19:47:57 -- nvmf/common.sh@411 -- # return 0 00:17:16.393 19:47:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:16.393 19:47:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.393 19:47:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:16.393 19:47:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:16.393 19:47:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.393 19:47:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:16.393 19:47:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:16.393 19:47:57 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:17:16.393 19:47:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:16.393 19:47:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:16.393 19:47:57 -- common/autotest_common.sh@10 -- # set +x 00:17:16.393 19:47:57 -- nvmf/common.sh@470 -- # nvmfpid=1723341 00:17:16.393 19:47:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:16.393 19:47:57 -- nvmf/common.sh@471 -- # waitforlisten 1723341 00:17:16.393 19:47:57 -- common/autotest_common.sh@817 -- # '[' -z 1723341 ']' 00:17:16.393 19:47:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.393 19:47:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:16.393 19:47:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.393 19:47:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:16.393 19:47:57 -- common/autotest_common.sh@10 -- # set +x 00:17:16.652 [2024-04-24 19:47:57.932448] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:17:16.652 [2024-04-24 19:47:57.932538] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.652 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.652 [2024-04-24 19:47:57.999385] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.652 [2024-04-24 19:47:58.112281] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:16.652 [2024-04-24 19:47:58.112338] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.652 [2024-04-24 19:47:58.112351] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.652 [2024-04-24 19:47:58.112362] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.652 [2024-04-24 19:47:58.112372] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.652 [2024-04-24 19:47:58.112469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.652 [2024-04-24 19:47:58.112500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.652 [2024-04-24 19:47:58.112558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:16.652 [2024-04-24 19:47:58.112561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.912 19:47:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:16.912 19:47:58 -- common/autotest_common.sh@850 -- # return 0 00:17:16.912 19:47:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:16.912 19:47:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:16.912 19:47:58 -- common/autotest_common.sh@10 -- # set +x 00:17:16.912 19:47:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.912 19:47:58 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:16.912 19:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.912 19:47:58 -- common/autotest_common.sh@10 -- # set +x 00:17:16.912 [2024-04-24 19:47:58.274445] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.912 19:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.912 19:47:58 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:16.912 19:47:58 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:16.912 19:47:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:16.912 19:47:58 -- common/autotest_common.sh@10 -- # set +x 00:17:16.912 19:47:58 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:16.912 19:47:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.912 19:47:58 -- target/shutdown.sh@28 -- # cat 00:17:16.912 19:47:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.912 19:47:58 -- target/shutdown.sh@28 -- # cat 00:17:16.912 19:47:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.912 19:47:58 -- target/shutdown.sh@28 -- # cat 00:17:16.912 19:47:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.912 19:47:58 -- target/shutdown.sh@28 -- # cat 00:17:16.912 19:47:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.912 19:47:58 -- target/shutdown.sh@28 -- # cat 00:17:16.912 19:47:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.912 19:47:58 -- target/shutdown.sh@28 -- # cat 00:17:16.912 19:47:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.912 19:47:58 -- target/shutdown.sh@28 -- # cat 00:17:16.912 19:47:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.912 19:47:58 -- target/shutdown.sh@28 -- # cat 00:17:16.912 19:47:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.912 19:47:58 -- 
target/shutdown.sh@28 -- # cat 00:17:16.912 19:47:58 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:16.912 19:47:58 -- target/shutdown.sh@28 -- # cat 00:17:16.912 19:47:58 -- target/shutdown.sh@35 -- # rpc_cmd 00:17:16.912 19:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.912 19:47:58 -- common/autotest_common.sh@10 -- # set +x 00:17:16.912 Malloc1 00:17:16.912 [2024-04-24 19:47:58.363840] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.912 Malloc2 00:17:17.171 Malloc3 00:17:17.171 Malloc4 00:17:17.171 Malloc5 00:17:17.171 Malloc6 00:17:17.171 Malloc7 00:17:17.431 Malloc8 00:17:17.431 Malloc9 00:17:17.431 Malloc10 00:17:17.431 19:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.431 19:47:58 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:17.431 19:47:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:17.431 19:47:58 -- common/autotest_common.sh@10 -- # set +x 00:17:17.431 19:47:58 -- target/shutdown.sh@103 -- # perfpid=1723413 00:17:17.431 19:47:58 -- target/shutdown.sh@104 -- # waitforlisten 1723413 /var/tmp/bdevperf.sock 00:17:17.431 19:47:58 -- common/autotest_common.sh@817 -- # '[' -z 1723413 ']' 00:17:17.431 19:47:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.431 19:47:58 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:17.431 19:47:58 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:17.431 19:47:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:17.431 19:47:58 -- nvmf/common.sh@521 -- # config=() 00:17:17.431 19:47:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:17.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
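The bdevperf launch above passes its controller config as --json /dev/fd/63: shutdown.sh@102 uses bash process substitution to feed in the JSON generated by gen_nvmf_target_json (traced next) without a temporary file. Reconstructed with the flags copied from the trace (path shortened):

# <(...) is what the shell exposes as /dev/fd/63 in the traced command line
build/examples/bdevperf -r /var/tmp/bdevperf.sock \
  --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
  -q 64 -o 65536 -w verify -t 10
# -q 64: queue depth per job, -o 65536: 64 KiB I/Os, -w verify: verification
# workload (the "Verification LBA range" rows in the tables), -t 10: seconds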
00:17:17.431 19:47:58 -- nvmf/common.sh@521 -- # local subsystem config 00:17:17.431 19:47:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:17.431 19:47:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.431 19:47:58 -- common/autotest_common.sh@10 -- # set +x 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.431 { 00:17:17.431 "params": { 00:17:17.431 "name": "Nvme$subsystem", 00:17:17.431 "trtype": "$TEST_TRANSPORT", 00:17:17.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.431 "adrfam": "ipv4", 00:17:17.431 "trsvcid": "$NVMF_PORT", 00:17:17.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.431 "hdgst": ${hdgst:-false}, 00:17:17.431 "ddgst": ${ddgst:-false} 00:17:17.431 }, 00:17:17.431 "method": "bdev_nvme_attach_controller" 00:17:17.431 } 00:17:17.431 EOF 00:17:17.431 )") 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # cat 00:17:17.431 19:47:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.431 { 00:17:17.431 "params": { 00:17:17.431 "name": "Nvme$subsystem", 00:17:17.431 "trtype": "$TEST_TRANSPORT", 00:17:17.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.431 "adrfam": "ipv4", 00:17:17.431 "trsvcid": "$NVMF_PORT", 00:17:17.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.431 "hdgst": ${hdgst:-false}, 00:17:17.431 "ddgst": ${ddgst:-false} 00:17:17.431 }, 00:17:17.431 "method": "bdev_nvme_attach_controller" 00:17:17.431 } 00:17:17.431 EOF 00:17:17.431 )") 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # cat 00:17:17.431 19:47:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.431 { 00:17:17.431 "params": { 00:17:17.431 "name": "Nvme$subsystem", 00:17:17.431 "trtype": "$TEST_TRANSPORT", 00:17:17.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.431 "adrfam": "ipv4", 00:17:17.431 "trsvcid": "$NVMF_PORT", 00:17:17.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.431 "hdgst": ${hdgst:-false}, 00:17:17.431 "ddgst": ${ddgst:-false} 00:17:17.431 }, 00:17:17.431 "method": "bdev_nvme_attach_controller" 00:17:17.431 } 00:17:17.431 EOF 00:17:17.431 )") 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # cat 00:17:17.431 19:47:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.431 { 00:17:17.431 "params": { 00:17:17.431 "name": "Nvme$subsystem", 00:17:17.431 "trtype": "$TEST_TRANSPORT", 00:17:17.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.431 "adrfam": "ipv4", 00:17:17.431 "trsvcid": "$NVMF_PORT", 00:17:17.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.431 "hdgst": ${hdgst:-false}, 00:17:17.431 "ddgst": ${ddgst:-false} 00:17:17.431 }, 00:17:17.431 "method": "bdev_nvme_attach_controller" 00:17:17.431 } 00:17:17.431 EOF 00:17:17.431 )") 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # cat 00:17:17.431 19:47:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.431 { 00:17:17.431 "params": { 00:17:17.431 "name": "Nvme$subsystem", 00:17:17.431 "trtype": "$TEST_TRANSPORT", 00:17:17.431 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:17:17.431 "adrfam": "ipv4", 00:17:17.431 "trsvcid": "$NVMF_PORT", 00:17:17.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.431 "hdgst": ${hdgst:-false}, 00:17:17.431 "ddgst": ${ddgst:-false} 00:17:17.431 }, 00:17:17.431 "method": "bdev_nvme_attach_controller" 00:17:17.431 } 00:17:17.431 EOF 00:17:17.431 )") 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # cat 00:17:17.431 19:47:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.431 { 00:17:17.431 "params": { 00:17:17.431 "name": "Nvme$subsystem", 00:17:17.431 "trtype": "$TEST_TRANSPORT", 00:17:17.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.431 "adrfam": "ipv4", 00:17:17.431 "trsvcid": "$NVMF_PORT", 00:17:17.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.431 "hdgst": ${hdgst:-false}, 00:17:17.431 "ddgst": ${ddgst:-false} 00:17:17.431 }, 00:17:17.431 "method": "bdev_nvme_attach_controller" 00:17:17.431 } 00:17:17.431 EOF 00:17:17.431 )") 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # cat 00:17:17.431 19:47:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.431 { 00:17:17.431 "params": { 00:17:17.431 "name": "Nvme$subsystem", 00:17:17.431 "trtype": "$TEST_TRANSPORT", 00:17:17.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.431 "adrfam": "ipv4", 00:17:17.431 "trsvcid": "$NVMF_PORT", 00:17:17.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.431 "hdgst": ${hdgst:-false}, 00:17:17.431 "ddgst": ${ddgst:-false} 00:17:17.431 }, 00:17:17.431 "method": "bdev_nvme_attach_controller" 00:17:17.431 } 00:17:17.431 EOF 00:17:17.431 )") 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # cat 00:17:17.431 19:47:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.431 { 00:17:17.431 "params": { 00:17:17.431 "name": "Nvme$subsystem", 00:17:17.431 "trtype": "$TEST_TRANSPORT", 00:17:17.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.431 "adrfam": "ipv4", 00:17:17.431 "trsvcid": "$NVMF_PORT", 00:17:17.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.431 "hdgst": ${hdgst:-false}, 00:17:17.431 "ddgst": ${ddgst:-false} 00:17:17.431 }, 00:17:17.431 "method": "bdev_nvme_attach_controller" 00:17:17.431 } 00:17:17.431 EOF 00:17:17.431 )") 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # cat 00:17:17.431 19:47:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.431 { 00:17:17.431 "params": { 00:17:17.431 "name": "Nvme$subsystem", 00:17:17.431 "trtype": "$TEST_TRANSPORT", 00:17:17.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.431 "adrfam": "ipv4", 00:17:17.431 "trsvcid": "$NVMF_PORT", 00:17:17.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.431 "hdgst": ${hdgst:-false}, 00:17:17.431 "ddgst": ${ddgst:-false} 00:17:17.431 }, 00:17:17.431 "method": "bdev_nvme_attach_controller" 00:17:17.431 } 00:17:17.431 EOF 00:17:17.431 )") 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # cat 00:17:17.431 19:47:58 -- nvmf/common.sh@523 -- # for 
subsystem in "${@:-1}" 00:17:17.431 19:47:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:17.431 { 00:17:17.431 "params": { 00:17:17.431 "name": "Nvme$subsystem", 00:17:17.432 "trtype": "$TEST_TRANSPORT", 00:17:17.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.432 "adrfam": "ipv4", 00:17:17.432 "trsvcid": "$NVMF_PORT", 00:17:17.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.432 "hdgst": ${hdgst:-false}, 00:17:17.432 "ddgst": ${ddgst:-false} 00:17:17.432 }, 00:17:17.432 "method": "bdev_nvme_attach_controller" 00:17:17.432 } 00:17:17.432 EOF 00:17:17.432 )") 00:17:17.432 19:47:58 -- nvmf/common.sh@543 -- # cat 00:17:17.432 19:47:58 -- nvmf/common.sh@545 -- # jq . 00:17:17.432 19:47:58 -- nvmf/common.sh@546 -- # IFS=, 00:17:17.432 19:47:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:17.432 "params": { 00:17:17.432 "name": "Nvme1", 00:17:17.432 "trtype": "tcp", 00:17:17.432 "traddr": "10.0.0.2", 00:17:17.432 "adrfam": "ipv4", 00:17:17.432 "trsvcid": "4420", 00:17:17.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.432 "hdgst": false, 00:17:17.432 "ddgst": false 00:17:17.432 }, 00:17:17.432 "method": "bdev_nvme_attach_controller" 00:17:17.432 },{ 00:17:17.432 "params": { 00:17:17.432 "name": "Nvme2", 00:17:17.432 "trtype": "tcp", 00:17:17.432 "traddr": "10.0.0.2", 00:17:17.432 "adrfam": "ipv4", 00:17:17.432 "trsvcid": "4420", 00:17:17.432 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:17.432 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:17.432 "hdgst": false, 00:17:17.432 "ddgst": false 00:17:17.432 }, 00:17:17.432 "method": "bdev_nvme_attach_controller" 00:17:17.432 },{ 00:17:17.432 "params": { 00:17:17.432 "name": "Nvme3", 00:17:17.432 "trtype": "tcp", 00:17:17.432 "traddr": "10.0.0.2", 00:17:17.432 "adrfam": "ipv4", 00:17:17.432 "trsvcid": "4420", 00:17:17.432 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:17.432 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:17.432 "hdgst": false, 00:17:17.432 "ddgst": false 00:17:17.432 }, 00:17:17.432 "method": "bdev_nvme_attach_controller" 00:17:17.432 },{ 00:17:17.432 "params": { 00:17:17.432 "name": "Nvme4", 00:17:17.432 "trtype": "tcp", 00:17:17.432 "traddr": "10.0.0.2", 00:17:17.432 "adrfam": "ipv4", 00:17:17.432 "trsvcid": "4420", 00:17:17.432 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:17.432 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:17.432 "hdgst": false, 00:17:17.432 "ddgst": false 00:17:17.432 }, 00:17:17.432 "method": "bdev_nvme_attach_controller" 00:17:17.432 },{ 00:17:17.432 "params": { 00:17:17.432 "name": "Nvme5", 00:17:17.432 "trtype": "tcp", 00:17:17.432 "traddr": "10.0.0.2", 00:17:17.432 "adrfam": "ipv4", 00:17:17.432 "trsvcid": "4420", 00:17:17.432 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:17.432 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:17.432 "hdgst": false, 00:17:17.432 "ddgst": false 00:17:17.432 }, 00:17:17.432 "method": "bdev_nvme_attach_controller" 00:17:17.432 },{ 00:17:17.432 "params": { 00:17:17.432 "name": "Nvme6", 00:17:17.432 "trtype": "tcp", 00:17:17.432 "traddr": "10.0.0.2", 00:17:17.432 "adrfam": "ipv4", 00:17:17.432 "trsvcid": "4420", 00:17:17.432 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:17.432 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:17.432 "hdgst": false, 00:17:17.432 "ddgst": false 00:17:17.432 }, 00:17:17.432 "method": "bdev_nvme_attach_controller" 00:17:17.432 },{ 00:17:17.432 "params": { 00:17:17.432 "name": "Nvme7", 00:17:17.432 "trtype": 
"tcp", 00:17:17.432 "traddr": "10.0.0.2", 00:17:17.432 "adrfam": "ipv4", 00:17:17.432 "trsvcid": "4420", 00:17:17.432 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:17.432 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:17.432 "hdgst": false, 00:17:17.432 "ddgst": false 00:17:17.432 }, 00:17:17.432 "method": "bdev_nvme_attach_controller" 00:17:17.432 },{ 00:17:17.432 "params": { 00:17:17.432 "name": "Nvme8", 00:17:17.432 "trtype": "tcp", 00:17:17.432 "traddr": "10.0.0.2", 00:17:17.432 "adrfam": "ipv4", 00:17:17.432 "trsvcid": "4420", 00:17:17.432 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:17.432 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:17.432 "hdgst": false, 00:17:17.432 "ddgst": false 00:17:17.432 }, 00:17:17.432 "method": "bdev_nvme_attach_controller" 00:17:17.432 },{ 00:17:17.432 "params": { 00:17:17.432 "name": "Nvme9", 00:17:17.432 "trtype": "tcp", 00:17:17.432 "traddr": "10.0.0.2", 00:17:17.432 "adrfam": "ipv4", 00:17:17.432 "trsvcid": "4420", 00:17:17.432 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:17.432 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:17.432 "hdgst": false, 00:17:17.432 "ddgst": false 00:17:17.432 }, 00:17:17.432 "method": "bdev_nvme_attach_controller" 00:17:17.432 },{ 00:17:17.432 "params": { 00:17:17.432 "name": "Nvme10", 00:17:17.432 "trtype": "tcp", 00:17:17.432 "traddr": "10.0.0.2", 00:17:17.432 "adrfam": "ipv4", 00:17:17.432 "trsvcid": "4420", 00:17:17.432 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:17.432 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:17.432 "hdgst": false, 00:17:17.432 "ddgst": false 00:17:17.432 }, 00:17:17.432 "method": "bdev_nvme_attach_controller" 00:17:17.432 }' 00:17:17.432 [2024-04-24 19:47:58.882424] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:17:17.432 [2024-04-24 19:47:58.882516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1723413 ] 00:17:17.432 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.692 [2024-04-24 19:47:58.948568] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.692 [2024-04-24 19:47:59.059553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.639 Running I/O for 10 seconds... 
00:17:19.639 19:48:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:19.639 19:48:00 -- common/autotest_common.sh@850 -- # return 0 00:17:19.639 19:48:00 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:19.639 19:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.639 19:48:00 -- common/autotest_common.sh@10 -- # set +x 00:17:19.639 19:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.639 19:48:00 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:19.639 19:48:00 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:19.639 19:48:00 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:17:19.639 19:48:00 -- target/shutdown.sh@57 -- # local ret=1 00:17:19.639 19:48:00 -- target/shutdown.sh@58 -- # local i 00:17:19.639 19:48:00 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:17:19.639 19:48:00 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:19.639 19:48:00 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:19.639 19:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.639 19:48:00 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:19.639 19:48:00 -- common/autotest_common.sh@10 -- # set +x 00:17:19.639 19:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.639 19:48:00 -- target/shutdown.sh@60 -- # read_io_count=3 00:17:19.639 19:48:00 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:17:19.639 19:48:00 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:19.898 19:48:01 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:19.898 19:48:01 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:19.898 19:48:01 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:19.898 19:48:01 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:19.898 19:48:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.898 19:48:01 -- common/autotest_common.sh@10 -- # set +x 00:17:19.898 19:48:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.898 19:48:01 -- target/shutdown.sh@60 -- # read_io_count=67 00:17:19.898 19:48:01 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:17:19.898 19:48:01 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:20.156 19:48:01 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:20.156 19:48:01 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:20.156 19:48:01 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:20.156 19:48:01 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:20.156 19:48:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.156 19:48:01 -- common/autotest_common.sh@10 -- # set +x 00:17:20.156 19:48:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.156 19:48:01 -- target/shutdown.sh@60 -- # read_io_count=131 00:17:20.156 19:48:01 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:17:20.156 19:48:01 -- target/shutdown.sh@64 -- # ret=0 00:17:20.156 19:48:01 -- target/shutdown.sh@65 -- # break 00:17:20.156 19:48:01 -- target/shutdown.sh@69 -- # return 0 00:17:20.156 19:48:01 -- target/shutdown.sh@110 -- # killprocess 1723413 00:17:20.156 19:48:01 -- common/autotest_common.sh@936 -- # '[' -z 1723413 ']' 00:17:20.156 19:48:01 -- common/autotest_common.sh@940 -- # kill -0 1723413 00:17:20.156 19:48:01 -- common/autotest_common.sh@941 -- # uname 00:17:20.156 19:48:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:17:20.156 19:48:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1723413
00:17:20.156 19:48:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:20.156 19:48:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:20.156 19:48:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1723413'
killing process with pid 1723413
00:17:20.156 19:48:01 -- common/autotest_common.sh@955 -- # kill 1723413
00:17:20.156 19:48:01 -- common/autotest_common.sh@960 -- # wait 1723413
00:17:20.156 Received shutdown signal, test time was about 0.947022 seconds
00:17:20.156
00:17:20.156 Latency(us)
00:17:20.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:20.156 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.156 Verification LBA range: start 0x0 length 0x400
00:17:20.156 Nvme1n1 : 0.92 209.66 13.10 0.00 0.00 301717.68 22816.24 264085.81
00:17:20.156 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.156 Verification LBA range: start 0x0 length 0x400
00:17:20.156 Nvme2n1 : 0.91 210.77 13.17 0.00 0.00 293147.18 21942.42 239230.67
00:17:20.157 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.157 Verification LBA range: start 0x0 length 0x400
00:17:20.157 Nvme3n1 : 0.93 229.82 14.36 0.00 0.00 257913.31 9611.95 260978.92
00:17:20.157 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.157 Verification LBA range: start 0x0 length 0x400
00:17:20.157 Nvme4n1 : 0.92 278.02 17.38 0.00 0.00 213287.44 26991.12 240784.12
00:17:20.157 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.157 Verification LBA range: start 0x0 length 0x400
00:17:20.157 Nvme5n1 : 0.93 207.33 12.96 0.00 0.00 280649.83 25049.32 229910.00
00:17:20.157 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.157 Verification LBA range: start 0x0 length 0x400
00:17:20.157 Nvme6n1 : 0.94 204.21 12.76 0.00 0.00 279220.78 41748.86 257872.02
00:17:20.157 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.157 Verification LBA range: start 0x0 length 0x400
00:17:20.157 Nvme7n1 : 0.95 275.84 17.24 0.00 0.00 201541.80 4805.97 254765.13
00:17:20.157 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.157 Verification LBA range: start 0x0 length 0x400
00:17:20.157 Nvme8n1 : 0.94 273.37 17.09 0.00 0.00 199002.45 20486.07 253211.69
00:17:20.157 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.157 Verification LBA range: start 0x0 length 0x400
00:17:20.157 Nvme9n1 : 0.94 211.72 13.23 0.00 0.00 250013.34 6310.87 278066.82
00:17:20.157 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:20.157 Verification LBA range: start 0x0 length 0x400
00:17:20.157 Nvme10n1 : 0.95 203.12 12.70 0.00 0.00 257258.64 32039.82 288940.94
00:17:20.157 ===================================================================================================================
00:17:20.157 Total : 2303.87 143.99 0.00 0.00 248926.61 4805.97 288940.94
00:17:20.720 19:48:01 -- target/shutdown.sh@113 -- # sleep 1
00:17:21.656 19:48:02 -- target/shutdown.sh@114 -- # kill -0 1723341
00:17:21.656 19:48:02 -- target/shutdown.sh@116 -- # stoptarget
00:17:21.656 19:48:02 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:17:21.656 19:48:02 -- target/shutdown.sh@42 -- #
rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:21.656 19:48:02 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:21.656 19:48:02 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:21.656 19:48:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:21.656 19:48:02 -- nvmf/common.sh@117 -- # sync 00:17:21.656 19:48:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:21.656 19:48:02 -- nvmf/common.sh@120 -- # set +e 00:17:21.656 19:48:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:21.656 19:48:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:21.656 rmmod nvme_tcp 00:17:21.656 rmmod nvme_fabrics 00:17:21.656 rmmod nvme_keyring 00:17:21.656 19:48:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:21.656 19:48:02 -- nvmf/common.sh@124 -- # set -e 00:17:21.656 19:48:02 -- nvmf/common.sh@125 -- # return 0 00:17:21.656 19:48:02 -- nvmf/common.sh@478 -- # '[' -n 1723341 ']' 00:17:21.656 19:48:02 -- nvmf/common.sh@479 -- # killprocess 1723341 00:17:21.656 19:48:02 -- common/autotest_common.sh@936 -- # '[' -z 1723341 ']' 00:17:21.656 19:48:02 -- common/autotest_common.sh@940 -- # kill -0 1723341 00:17:21.656 19:48:02 -- common/autotest_common.sh@941 -- # uname 00:17:21.656 19:48:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.656 19:48:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1723341 00:17:21.656 19:48:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:21.656 19:48:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:21.656 19:48:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1723341' 00:17:21.656 killing process with pid 1723341 00:17:21.656 19:48:03 -- common/autotest_common.sh@955 -- # kill 1723341 00:17:21.656 19:48:03 -- common/autotest_common.sh@960 -- # wait 1723341 00:17:22.222 19:48:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:22.222 19:48:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:22.222 19:48:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:22.222 19:48:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.222 19:48:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:22.222 19:48:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.222 19:48:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.222 19:48:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.133 19:48:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:24.133 00:17:24.133 real 0m7.879s 00:17:24.133 user 0m23.687s 00:17:24.133 sys 0m1.666s 00:17:24.133 19:48:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:24.133 19:48:05 -- common/autotest_common.sh@10 -- # set +x 00:17:24.133 ************************************ 00:17:24.133 END TEST nvmf_shutdown_tc2 00:17:24.133 ************************************ 00:17:24.133 19:48:05 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:17:24.133 19:48:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:24.133 19:48:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:24.134 19:48:05 -- common/autotest_common.sh@10 -- # set +x 00:17:24.393 ************************************ 00:17:24.393 START TEST nvmf_shutdown_tc3 00:17:24.393 ************************************ 00:17:24.393 19:48:05 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 
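A quick consistency check on the two bdevperf tables above: with -o 65536 every I/O is 64 KiB, so the MiB/s column should equal IOPS/16, which it does to within rounding:

# MiB/s = IOPS * 65536 / 1048576 = IOPS / 16 (bc truncates, the tables round)
echo 'scale=2; 173.32 / 16' | bc    # 10.83  -> tc1 Nvme1n1 row
echo 'scale=2; 2342.49 / 16' | bc   # 146.40 -> tc1 Total row (146.41 after rounding)
echo 'scale=2; 2303.87 / 16' | bc   # 143.99 -> tc2 Total row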
00:17:24.393 19:48:05 -- target/shutdown.sh@121 -- # starttarget 00:17:24.393 19:48:05 -- target/shutdown.sh@15 -- # nvmftestinit 00:17:24.393 19:48:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:24.393 19:48:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.393 19:48:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:24.393 19:48:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:24.393 19:48:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:24.393 19:48:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.393 19:48:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.393 19:48:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.393 19:48:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:24.393 19:48:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:24.393 19:48:05 -- common/autotest_common.sh@10 -- # set +x 00:17:24.393 19:48:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:24.393 19:48:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:24.393 19:48:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:24.393 19:48:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:24.393 19:48:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:24.393 19:48:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:24.393 19:48:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:24.393 19:48:05 -- nvmf/common.sh@295 -- # net_devs=() 00:17:24.393 19:48:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:24.393 19:48:05 -- nvmf/common.sh@296 -- # e810=() 00:17:24.393 19:48:05 -- nvmf/common.sh@296 -- # local -ga e810 00:17:24.393 19:48:05 -- nvmf/common.sh@297 -- # x722=() 00:17:24.393 19:48:05 -- nvmf/common.sh@297 -- # local -ga x722 00:17:24.393 19:48:05 -- nvmf/common.sh@298 -- # mlx=() 00:17:24.393 19:48:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:24.393 19:48:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.393 19:48:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.393 19:48:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.393 19:48:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.393 19:48:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.393 19:48:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.393 19:48:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.393 19:48:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.393 19:48:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.393 19:48:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.393 19:48:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.393 19:48:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:24.393 19:48:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:24.393 19:48:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:24.393 19:48:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.393 19:48:05 -- nvmf/common.sh@341 -- # echo 
'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:24.393 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:24.393 19:48:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.393 19:48:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:24.393 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:24.393 19:48:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:24.393 19:48:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.393 19:48:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.393 19:48:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:24.393 19:48:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.393 19:48:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:24.393 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:24.393 19:48:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.393 19:48:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.393 19:48:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.393 19:48:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:24.393 19:48:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.393 19:48:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:24.393 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:24.393 19:48:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.393 19:48:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:24.393 19:48:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:24.393 19:48:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:24.393 19:48:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:24.393 19:48:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.393 19:48:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.393 19:48:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:24.393 19:48:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:24.393 19:48:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:24.393 19:48:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:24.394 19:48:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:24.394 19:48:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:24.394 19:48:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.394 19:48:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:24.394 19:48:05 -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:17:24.394 19:48:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:24.394 19:48:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:24.394 19:48:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:24.394 19:48:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:24.394 19:48:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:24.394 19:48:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:24.394 19:48:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:24.394 19:48:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:24.394 19:48:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:24.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:17:24.394 00:17:24.394 --- 10.0.0.2 ping statistics --- 00:17:24.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.394 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:17:24.394 19:48:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:24.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:24.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:17:24.394 00:17:24.394 --- 10.0.0.1 ping statistics --- 00:17:24.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.394 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:17:24.394 19:48:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.394 19:48:05 -- nvmf/common.sh@411 -- # return 0 00:17:24.394 19:48:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:24.394 19:48:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.394 19:48:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:24.394 19:48:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:24.394 19:48:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.394 19:48:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:24.394 19:48:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:24.394 19:48:05 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:17:24.394 19:48:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:24.394 19:48:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:24.394 19:48:05 -- common/autotest_common.sh@10 -- # set +x 00:17:24.394 19:48:05 -- nvmf/common.sh@470 -- # nvmfpid=1724556 00:17:24.394 19:48:05 -- nvmf/common.sh@471 -- # waitforlisten 1724556 00:17:24.394 19:48:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:24.394 19:48:05 -- common/autotest_common.sh@817 -- # '[' -z 1724556 ']' 00:17:24.394 19:48:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.394 19:48:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:24.394 19:48:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
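The nvmftestinit sequence traced above turns the single E810 card (both 0x8086:0x159b ports were discovered as cvl_0_0 and cvl_0_1) into a two-endpoint NVMe/TCP topology: the first port is moved into its own network namespace to act as the target, while the second stays in the default namespace as the initiator. Condensed from the traced commands:

    ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # first port becomes the target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator sanity check

nvmf_tgt is then launched under that namespace; its command line carries the `ip netns exec cvl_0_0_ns_spdk` prefix three times, apparently because the `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")` line at nvmf/common.sh@270 prepended the prefix more than once as the helpers were sourced.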
00:17:24.394 19:48:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:24.394 19:48:05 -- common/autotest_common.sh@10 -- # set +x 00:17:24.653 [2024-04-24 19:48:05.938426] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:17:24.653 [2024-04-24 19:48:05.938528] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.653 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.653 [2024-04-24 19:48:06.009470] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.653 [2024-04-24 19:48:06.125429] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.653 [2024-04-24 19:48:06.125497] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.653 [2024-04-24 19:48:06.125514] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.653 [2024-04-24 19:48:06.125527] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.653 [2024-04-24 19:48:06.125540] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.653 [2024-04-24 19:48:06.125662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.653 [2024-04-24 19:48:06.125755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.653 [2024-04-24 19:48:06.125816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:24.653 [2024-04-24 19:48:06.125819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.589 19:48:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:25.589 19:48:06 -- common/autotest_common.sh@850 -- # return 0 00:17:25.589 19:48:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:25.589 19:48:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:25.589 19:48:06 -- common/autotest_common.sh@10 -- # set +x 00:17:25.589 19:48:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.589 19:48:06 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:25.589 19:48:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.589 19:48:06 -- common/autotest_common.sh@10 -- # set +x 00:17:25.589 [2024-04-24 19:48:06.897566] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.589 19:48:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.589 19:48:06 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:17:25.589 19:48:06 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:17:25.589 19:48:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:25.589 19:48:06 -- common/autotest_common.sh@10 -- # set +x 00:17:25.589 19:48:06 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:25.589 19:48:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.589 19:48:06 -- target/shutdown.sh@28 -- # cat 00:17:25.589 19:48:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.589 19:48:06 -- target/shutdown.sh@28 -- # cat 00:17:25.589 19:48:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.589 19:48:06 -- target/shutdown.sh@28 -- # cat 
00:17:25.589 19:48:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.589 19:48:06 -- target/shutdown.sh@28 -- # cat 00:17:25.589 19:48:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.589 19:48:06 -- target/shutdown.sh@28 -- # cat 00:17:25.589 19:48:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.589 19:48:06 -- target/shutdown.sh@28 -- # cat 00:17:25.589 19:48:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.589 19:48:06 -- target/shutdown.sh@28 -- # cat 00:17:25.589 19:48:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.589 19:48:06 -- target/shutdown.sh@28 -- # cat 00:17:25.589 19:48:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.589 19:48:06 -- target/shutdown.sh@28 -- # cat 00:17:25.589 19:48:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:17:25.589 19:48:06 -- target/shutdown.sh@28 -- # cat 00:17:25.589 19:48:06 -- target/shutdown.sh@35 -- # rpc_cmd 00:17:25.589 19:48:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.589 19:48:06 -- common/autotest_common.sh@10 -- # set +x 00:17:25.589 Malloc1 00:17:25.589 [2024-04-24 19:48:06.973297] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.589 Malloc2 00:17:25.589 Malloc3 00:17:25.589 Malloc4 00:17:25.847 Malloc5 00:17:25.847 Malloc6 00:17:25.847 Malloc7 00:17:25.847 Malloc8 00:17:25.847 Malloc9 00:17:26.107 Malloc10 00:17:26.107 19:48:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.107 19:48:07 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:26.107 19:48:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:26.107 19:48:07 -- common/autotest_common.sh@10 -- # set +x 00:17:26.107 19:48:07 -- target/shutdown.sh@125 -- # perfpid=1724952 00:17:26.107 19:48:07 -- target/shutdown.sh@126 -- # waitforlisten 1724952 /var/tmp/bdevperf.sock 00:17:26.107 19:48:07 -- common/autotest_common.sh@817 -- # '[' -z 1724952 ']' 00:17:26.107 19:48:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.107 19:48:07 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:26.107 19:48:07 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:26.107 19:48:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:26.107 19:48:07 -- nvmf/common.sh@521 -- # config=() 00:17:26.107 19:48:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
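The ten `for i in "${num_subsystems[@]}"` / `cat` iterations above append one block of RPC commands per subsystem to rpcs.txt; xtrace does not echo the heredoc body itself. Judging from the Malloc1..Malloc10 bdevs created next and the cnode1..cnode10 subsystems the bdevperf config below attaches to, each block is presumably of this shape (a sketch: these are standard SPDK RPC names, but the exact sizes and arguments are not visible in this log):

    for i in "${num_subsystems[@]}"; do   # num_subsystems=({1..10})
        cat <<- EOF >> "$testdir/rpcs.txt"
            bdev_malloc_create -b Malloc$i $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE
            nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
            nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
            nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
        EOF
    done
    rpc_cmd < "$testdir/rpcs.txt"   # the bare rpc_cmd at shutdown.sh@35 reads this batch on stdin

This would also explain the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice that lands in the middle of the Malloc bdev creation output.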
00:17:26.107 19:48:07 -- nvmf/common.sh@521 -- # local subsystem config 00:17:26.107 19:48:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:26.107 19:48:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.107 19:48:07 -- common/autotest_common.sh@10 -- # set +x 00:17:26.107 19:48:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.107 { 00:17:26.107 "params": { 00:17:26.107 "name": "Nvme$subsystem", 00:17:26.107 "trtype": "$TEST_TRANSPORT", 00:17:26.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.107 "adrfam": "ipv4", 00:17:26.107 "trsvcid": "$NVMF_PORT", 00:17:26.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.107 "hdgst": ${hdgst:-false}, 00:17:26.107 "ddgst": ${ddgst:-false} 00:17:26.107 }, 00:17:26.107 "method": "bdev_nvme_attach_controller" 00:17:26.107 } 00:17:26.107 EOF 00:17:26.107 )") 00:17:26.107 19:48:07 -- nvmf/common.sh@543 -- # cat 00:17:26.107 19:48:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.107 19:48:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.107 { 00:17:26.107 "params": { 00:17:26.107 "name": "Nvme$subsystem", 00:17:26.107 "trtype": "$TEST_TRANSPORT", 00:17:26.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.107 "adrfam": "ipv4", 00:17:26.107 "trsvcid": "$NVMF_PORT", 00:17:26.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.107 "hdgst": ${hdgst:-false}, 00:17:26.107 "ddgst": ${ddgst:-false} 00:17:26.107 }, 00:17:26.107 "method": "bdev_nvme_attach_controller" 00:17:26.107 } 00:17:26.107 EOF 00:17:26.107 )") 00:17:26.107 19:48:07 -- nvmf/common.sh@543 -- # cat 00:17:26.107 19:48:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.107 19:48:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.107 { 00:17:26.107 "params": { 00:17:26.107 "name": "Nvme$subsystem", 00:17:26.107 "trtype": "$TEST_TRANSPORT", 00:17:26.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.107 "adrfam": "ipv4", 00:17:26.107 "trsvcid": "$NVMF_PORT", 00:17:26.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.107 "hdgst": ${hdgst:-false}, 00:17:26.107 "ddgst": ${ddgst:-false} 00:17:26.107 }, 00:17:26.107 "method": "bdev_nvme_attach_controller" 00:17:26.107 } 00:17:26.107 EOF 00:17:26.107 )") 00:17:26.107 19:48:07 -- nvmf/common.sh@543 -- # cat 00:17:26.107 19:48:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.107 19:48:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.107 { 00:17:26.107 "params": { 00:17:26.107 "name": "Nvme$subsystem", 00:17:26.107 "trtype": "$TEST_TRANSPORT", 00:17:26.107 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.107 "adrfam": "ipv4", 00:17:26.107 "trsvcid": "$NVMF_PORT", 00:17:26.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.107 "hdgst": ${hdgst:-false}, 00:17:26.107 "ddgst": ${ddgst:-false} 00:17:26.107 }, 00:17:26.107 "method": "bdev_nvme_attach_controller" 00:17:26.107 } 00:17:26.107 EOF 00:17:26.107 )") 00:17:26.107 19:48:07 -- nvmf/common.sh@543 -- # cat 00:17:26.107 19:48:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.107 19:48:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.107 { 00:17:26.107 "params": { 00:17:26.107 "name": "Nvme$subsystem", 00:17:26.107 "trtype": "$TEST_TRANSPORT", 00:17:26.107 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:17:26.107 "adrfam": "ipv4", 00:17:26.107 "trsvcid": "$NVMF_PORT", 00:17:26.107 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.107 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.107 "hdgst": ${hdgst:-false}, 00:17:26.107 "ddgst": ${ddgst:-false} 00:17:26.107 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 } 00:17:26.108 EOF 00:17:26.108 )") 00:17:26.108 19:48:07 -- nvmf/common.sh@543 -- # cat 00:17:26.108 19:48:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.108 19:48:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.108 { 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme$subsystem", 00:17:26.108 "trtype": "$TEST_TRANSPORT", 00:17:26.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "$NVMF_PORT", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.108 "hdgst": ${hdgst:-false}, 00:17:26.108 "ddgst": ${ddgst:-false} 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 } 00:17:26.108 EOF 00:17:26.108 )") 00:17:26.108 19:48:07 -- nvmf/common.sh@543 -- # cat 00:17:26.108 19:48:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.108 19:48:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.108 { 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme$subsystem", 00:17:26.108 "trtype": "$TEST_TRANSPORT", 00:17:26.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "$NVMF_PORT", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.108 "hdgst": ${hdgst:-false}, 00:17:26.108 "ddgst": ${ddgst:-false} 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 } 00:17:26.108 EOF 00:17:26.108 )") 00:17:26.108 19:48:07 -- nvmf/common.sh@543 -- # cat 00:17:26.108 19:48:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.108 19:48:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.108 { 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme$subsystem", 00:17:26.108 "trtype": "$TEST_TRANSPORT", 00:17:26.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "$NVMF_PORT", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.108 "hdgst": ${hdgst:-false}, 00:17:26.108 "ddgst": ${ddgst:-false} 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 } 00:17:26.108 EOF 00:17:26.108 )") 00:17:26.108 19:48:07 -- nvmf/common.sh@543 -- # cat 00:17:26.108 19:48:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.108 19:48:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.108 { 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme$subsystem", 00:17:26.108 "trtype": "$TEST_TRANSPORT", 00:17:26.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "$NVMF_PORT", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.108 "hdgst": ${hdgst:-false}, 00:17:26.108 "ddgst": ${ddgst:-false} 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 } 00:17:26.108 EOF 00:17:26.108 )") 00:17:26.108 19:48:07 -- nvmf/common.sh@543 -- # cat 00:17:26.108 19:48:07 -- nvmf/common.sh@523 -- # for 
subsystem in "${@:-1}" 00:17:26.108 19:48:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.108 { 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme$subsystem", 00:17:26.108 "trtype": "$TEST_TRANSPORT", 00:17:26.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "$NVMF_PORT", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.108 "hdgst": ${hdgst:-false}, 00:17:26.108 "ddgst": ${ddgst:-false} 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 } 00:17:26.108 EOF 00:17:26.108 )") 00:17:26.108 19:48:07 -- nvmf/common.sh@543 -- # cat 00:17:26.108 19:48:07 -- nvmf/common.sh@545 -- # jq . 00:17:26.108 19:48:07 -- nvmf/common.sh@546 -- # IFS=, 00:17:26.108 19:48:07 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme1", 00:17:26.108 "trtype": "tcp", 00:17:26.108 "traddr": "10.0.0.2", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "4420", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:26.108 "hdgst": false, 00:17:26.108 "ddgst": false 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 },{ 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme2", 00:17:26.108 "trtype": "tcp", 00:17:26.108 "traddr": "10.0.0.2", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "4420", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:26.108 "hdgst": false, 00:17:26.108 "ddgst": false 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 },{ 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme3", 00:17:26.108 "trtype": "tcp", 00:17:26.108 "traddr": "10.0.0.2", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "4420", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:26.108 "hdgst": false, 00:17:26.108 "ddgst": false 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 },{ 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme4", 00:17:26.108 "trtype": "tcp", 00:17:26.108 "traddr": "10.0.0.2", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "4420", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:26.108 "hdgst": false, 00:17:26.108 "ddgst": false 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 },{ 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme5", 00:17:26.108 "trtype": "tcp", 00:17:26.108 "traddr": "10.0.0.2", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "4420", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:26.108 "hdgst": false, 00:17:26.108 "ddgst": false 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 },{ 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme6", 00:17:26.108 "trtype": "tcp", 00:17:26.108 "traddr": "10.0.0.2", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "4420", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:26.108 "hdgst": false, 00:17:26.108 "ddgst": false 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 },{ 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme7", 00:17:26.108 "trtype": 
"tcp", 00:17:26.108 "traddr": "10.0.0.2", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "4420", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:26.108 "hdgst": false, 00:17:26.108 "ddgst": false 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 },{ 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme8", 00:17:26.108 "trtype": "tcp", 00:17:26.108 "traddr": "10.0.0.2", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "4420", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:26.108 "hdgst": false, 00:17:26.108 "ddgst": false 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 },{ 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme9", 00:17:26.108 "trtype": "tcp", 00:17:26.108 "traddr": "10.0.0.2", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "4420", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:26.108 "hdgst": false, 00:17:26.108 "ddgst": false 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 },{ 00:17:26.108 "params": { 00:17:26.108 "name": "Nvme10", 00:17:26.108 "trtype": "tcp", 00:17:26.108 "traddr": "10.0.0.2", 00:17:26.108 "adrfam": "ipv4", 00:17:26.108 "trsvcid": "4420", 00:17:26.108 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:26.108 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:26.108 "hdgst": false, 00:17:26.108 "ddgst": false 00:17:26.108 }, 00:17:26.108 "method": "bdev_nvme_attach_controller" 00:17:26.108 }' 00:17:26.108 [2024-04-24 19:48:07.481130] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:17:26.108 [2024-04-24 19:48:07.481221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724952 ] 00:17:26.108 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.108 [2024-04-24 19:48:07.544824] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.367 [2024-04-24 19:48:07.655875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.266 Running I/O for 10 seconds... 
00:17:28.266 19:48:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:28.266 19:48:09 -- common/autotest_common.sh@850 -- # return 0 00:17:28.266 19:48:09 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:28.266 19:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.266 19:48:09 -- common/autotest_common.sh@10 -- # set +x 00:17:28.266 19:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.266 19:48:09 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:28.266 19:48:09 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:28.266 19:48:09 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:28.266 19:48:09 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:17:28.266 19:48:09 -- target/shutdown.sh@57 -- # local ret=1 00:17:28.266 19:48:09 -- target/shutdown.sh@58 -- # local i 00:17:28.266 19:48:09 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:17:28.267 19:48:09 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:28.267 19:48:09 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:28.267 19:48:09 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:28.267 19:48:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.267 19:48:09 -- common/autotest_common.sh@10 -- # set +x 00:17:28.267 19:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.267 19:48:09 -- target/shutdown.sh@60 -- # read_io_count=3 00:17:28.267 19:48:09 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:17:28.267 19:48:09 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:28.525 19:48:10 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:28.525 19:48:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:28.525 19:48:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:28.525 19:48:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:28.525 19:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.525 19:48:10 -- common/autotest_common.sh@10 -- # set +x 00:17:28.525 19:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.783 19:48:10 -- target/shutdown.sh@60 -- # read_io_count=67 00:17:28.783 19:48:10 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:17:28.783 19:48:10 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:28.783 19:48:10 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:28.783 19:48:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:29.055 19:48:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:29.055 19:48:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:29.055 19:48:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.055 19:48:10 -- common/autotest_common.sh@10 -- # set +x 00:17:29.055 19:48:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.055 19:48:10 -- target/shutdown.sh@60 -- # read_io_count=131 00:17:29.055 19:48:10 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:17:29.055 19:48:10 -- target/shutdown.sh@64 -- # ret=0 00:17:29.055 19:48:10 -- target/shutdown.sh@65 -- # break 00:17:29.055 19:48:10 -- target/shutdown.sh@69 -- # return 0 00:17:29.055 19:48:10 -- target/shutdown.sh@135 -- # killprocess 1724556 00:17:29.055 19:48:10 -- common/autotest_common.sh@936 -- # '[' -z 1724556 ']' 00:17:29.055 19:48:10 -- common/autotest_common.sh@940 -- # kill 
-0 1724556 00:17:29.055 19:48:10 -- common/autotest_common.sh@941 -- # uname 00:17:29.055 19:48:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:29.055 19:48:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1724556 00:17:29.055 19:48:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:29.055 19:48:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:29.055 19:48:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1724556' 00:17:29.055 killing process with pid 1724556 00:17:29.055 19:48:10 -- common/autotest_common.sh@955 -- # kill 1724556 00:17:29.055 19:48:10 -- common/autotest_common.sh@960 -- # wait 1724556
00:17:29.055 [2024-04-24 19:48:10.370543] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0c4d0 is same with the state(5) to be set
[... identical recv-state error repeated several dozen times for tqpair=0x1f0c4d0, 19:48:10.370543 through 19:48:10.371478 ...]
00:17:29.056 [2024-04-24 19:48:10.374252] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0a4c0 is same with the state(5) to be set
[... identical error repeated for tqpair=0x1f0a4c0, 19:48:10.374252 through 19:48:10.375106 ...]
00:17:29.056 [2024-04-24 19:48:10.376275] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0a950 is same with the state(5) to be set
[... identical error repeated for tqpair=0x1f0a950, 19:48:10.376275 through 19:48:10.377114 ...]
00:17:29.057 [2024-04-24 19:48:10.378660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0b290 is same with the state(5) to be set
[... identical error repeats for tqpair=0x1f0b290 from 19:48:10.378660 onward ...]
00:17:29.057 [2024-04-24 19:48:10.379006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.057 [2024-04-24 19:48:10.379052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.057 [2024-04-24 19:48:10.379071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.057 [2024-04-24 19:48:10.379087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b5880 is same with the state(5) to be set
00:17:29.058 [2024-04-24 19:48:10.379216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccb390 is same with the state(5) to be set
00:17:29.058 [2024-04-24 19:48:10.379392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7ac0 is same with the state(5) to be set
00:17:29.058 [2024-04-24 19:48:10.379578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe92d80 is same with the state(5) to be set
00:17:29.058 [2024-04-24 19:48:10.379781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec060 is same with the state(5) to be set
00:17:29.058 [2024-04-24 19:48:10.379945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.379979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.379993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.058 [2024-04-24 19:48:10.380006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.058 [2024-04-24 19:48:10.380020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.059 [2024-04-24 19:48:10.380046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe911d0 is same with the state(5) to be set
00:17:29.059 [2024-04-24 19:48:10.380363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.380976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.380991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.381009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.381024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.381038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.381053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.381073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.381089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.381103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.381118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.381132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.381147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.381161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.381176] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.381190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.381204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.381217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.381232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.381246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.381261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.381275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.059 [2024-04-24 19:48:10.381289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.059 [2024-04-24 19:48:10.381303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381303] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0b720 is same with the state(5) to be set
00:17:29.060 [2024-04-24 19:48:10.381318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.060 [2024-04-24 19:48:10.381943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.060 [2024-04-24 19:48:10.381958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.381974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.381992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.382021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.382050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.382080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.382116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.382146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.382180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.382217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.382246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.382273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.382302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.382329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.382356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.382383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.382411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.382452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:17:29.061 [2024-04-24 19:48:10.382540] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xde1910 was disconnected and freed. reset controller.
00:17:29.061 [2024-04-24 19:48:10.383159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.383184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.383221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.383248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.383256] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f0bbb0 is same with the state(5) to be set
00:17:29.061 [2024-04-24 19:48:10.383275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.383300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.383329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.383353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.383380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.383407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.383437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.383470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.383499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.383524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.383554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.383579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.383605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.383639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.383678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.383693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.383709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.383723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.383738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.383752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.061 [2024-04-24 19:48:10.383767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.061 [2024-04-24 19:48:10.383780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.383800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.383814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.383829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.383844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.383859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.383872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.383899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.383912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.383928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.383942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.383956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.383970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.383985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24 19:48:10.384598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.062 [2024-04-24 19:48:10.384614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.062 [2024-04-24
19:48:10.384632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.062 [2024-04-24 19:48:10.384666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.062 [2024-04-24 19:48:10.384683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.062 [2024-04-24 19:48:10.384699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.062 [2024-04-24 19:48:10.384713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.062 [2024-04-24 19:48:10.384728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.062 [2024-04-24 19:48:10.384741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.062 [2024-04-24 19:48:10.384757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.062 [2024-04-24 19:48:10.384770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.062 [2024-04-24 19:48:10.384785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.062 [2024-04-24 19:48:10.384798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.062 [2024-04-24 19:48:10.384813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.062 [2024-04-24 19:48:10.384826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.062 [2024-04-24 19:48:10.384841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.062 [2024-04-24 19:48:10.384855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.062 [2024-04-24 19:48:10.384870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.062 [2024-04-24 19:48:10.384883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.062 [2024-04-24 19:48:10.384906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.062 [2024-04-24 19:48:10.384919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.062 [2024-04-24 19:48:10.384934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.062 [2024-04-24 
19:48:10.384961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.062 [2024-04-24 19:48:10.384978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.062 [2024-04-24 19:48:10.384994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.062 [2024-04-24 19:48:10.403847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.403933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.403953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.403969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.403986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.404001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.404017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.404031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.404048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.404063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.404078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.404092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.404108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.404122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.404138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.404153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.404168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.404183] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.404199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.404213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.404229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.404242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.404258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.404272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.404303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.404318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.404427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:17:29.063 [2024-04-24 19:48:10.404537] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe63020 was disconnected and freed. reset controller. 
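The records above trace one failover step on the initiator side: spdk_nvme_qpair_process_completions() reports "CQ transport error -6 (No such device or address)" (-ENXIO) once the TCP connection drops, every outstanding I/O completes as ABORTED - SQ DELETION (00/08), and bdev_nvme's disconnected-qpair callback frees the qpair and triggers a controller reset. A minimal sketch of that detect-and-recover path against the public SPDK API follows; it is illustrative only — the poll_and_recover() helper and its surrounding setup are assumptions, not the autotest's code.

    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Illustrative sketch, not autotest code: detect the "CQ transport
     * error -6" condition logged above and recover the way the log
     * does ("reset controller"). */
    static void
    poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
        /* Process all available completions (max_completions = 0 means
         * no limit). When the TCP connection is gone this returns
         * -ENXIO, i.e. the -6 printed in the log. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc == -ENXIO) {
            /* Outstanding I/O has already come back as
             * ABORTED - SQ DELETION (00/08), matching the completions
             * printed above; reset the controller to recover. */
            if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                fprintf(stderr, "controller reset failed\n");
            }
        }
    }

In the test run itself this recovery is driven by the bdev_nvme module's disconnected-qpair callback named in the log (bdev_nvme_disconnected_qpair_cb) rather than by application-level polling as sketched here.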
00:17:29.063 [2024-04-24 19:48:10.405421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 
19:48:10.405775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.405973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.405989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.406002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.406018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.406033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.406049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.406063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 
19:48:10.406080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.406093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.406110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.406124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.406139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.406153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.406173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.406187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.406203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.406216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.063 [2024-04-24 19:48:10.406233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.063 [2024-04-24 19:48:10.406246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 
19:48:10.406381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 
19:48:10.406703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.406972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.406988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.407002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 
19:48:10.407018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.407032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.407048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.407062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.407078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.407092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.407107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.407121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.407137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.407151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.407167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.407182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.407197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.407212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.407228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.407242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.407257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.407271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.064 [2024-04-24 19:48:10.407287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.064 [2024-04-24 19:48:10.407301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 
19:48:10.407317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.407335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.407352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.407365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.407382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.407395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.407411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.407425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.407455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:17:29.065 [2024-04-24 19:48:10.407532] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcc4500 was disconnected and freed. reset controller. 00:17:29.065 [2024-04-24 19:48:10.407769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.407794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.407810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.407824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.407838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.407852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.407866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.407879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.407898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd14630 is same with the state(5) to be set 00:17:29.065 [2024-04-24 19:48:10.407945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.407965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.407981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.407995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.408009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.408022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.408036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.408054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.408069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd34f60 is same with the state(5) to be set 00:17:29.065 [2024-04-24 19:48:10.408120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.408141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.408156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.408169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.408183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.408197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.408211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.408224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.408237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17690 is same with the state(5) to be set 00:17:29.065 [2024-04-24 19:48:10.408266] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b5880 (9): Bad file descriptor 00:17:29.065 [2024-04-24 19:48:10.408294] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccb390 (9): Bad file descriptor 00:17:29.065 [2024-04-24 19:48:10.408326] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf7ac0 (9): Bad file descriptor 00:17:29.065 [2024-04-24 19:48:10.408351] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe92d80 (9): Bad file descriptor 00:17:29.065 [2024-04-24 19:48:10.408379] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcec060 (9): Bad file descriptor 00:17:29.065 [2024-04-24 19:48:10.408408] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe911d0 (9): Bad file descriptor 00:17:29.065 [2024-04-24 19:48:10.408451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.408472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.408495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.408510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.408524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.408537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.408551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.065 [2024-04-24 19:48:10.408564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.408577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0af20 is same with the state(5) to be set 00:17:29.065 [2024-04-24 19:48:10.409823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.409847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.409868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.409883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.409909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.409923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.409939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.409952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.409967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.409980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.409995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.410009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.410025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.410039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.410055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.410068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.410084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.410097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.410112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.410126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.410142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.410156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.410171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.410184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.410200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.410219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.065 [2024-04-24 19:48:10.410236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.065 [2024-04-24 19:48:10.410251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.066 [2024-04-24 19:48:10.410266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.066 [2024-04-24 19:48:10.410280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.066 [2024-04-24 19:48:10.410295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.066 [2024-04-24 19:48:10.410309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.066 [2024-04-24 19:48:10.410324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.066 [2024-04-24 19:48:10.410339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.066 [2024-04-24 19:48:10.410354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.066 [2024-04-24 19:48:10.410367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.066 [2024-04-24 19:48:10.410383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.066 [2024-04-24 19:48:10.410397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.066 [2024-04-24 19:48:10.410412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.066 [2024-04-24 19:48:10.410425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.066 [2024-04-24 19:48:10.410441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.066 [2024-04-24 19:48:10.410455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.066 [2024-04-24 19:48:10.410471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.066 [2024-04-24 19:48:10.410484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.066 [2024-04-24 19:48:10.410499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.066 [2024-04-24 19:48:10.410513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.066 [2024-04-24 19:48:10.410528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.066 [2024-04-24 19:48:10.410541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.066 [2024-04-24 19:48:10.410557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.066 [2024-04-24 19:48:10.410571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:17:29.066 [2024-04-24 19:48:10.410590-19:48:10.411775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25-63 nsid:1 lba:27776-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
(39 WRITE commands, lba stepping by 128; each command paired with nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the identical notice pair repeats for every cid from 25 through 63)
00:17:29.066 [2024-04-24 19:48:10.411867] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe5f280 was disconnected and freed. reset controller.
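Every outstanding WRITE above reports the same completion status, ABORTED - SQ DELETION (00/08): status code type 0x0 (generic) with status code 0x08 (Command Aborted due to SQ Deletion), which is what in-flight I/O returns when its submission queue is deleted during a controller reset. A minimal standalone sketch of decoding that status word (field layout per the NVMe base specification; an illustration, not SPDK's own print routine):

/* Decode the 16-bit status field from NVMe CQE Dword 3 bits 31:16,
 * printed above as "(SCT/SC) ... p m dnr". Layout per the NVMe base spec:
 *   bit 0      P   - phase tag
 *   bits 8:1   SC  - status code
 *   bits 11:9  SCT - status code type
 *   bit 14     M   - more
 *   bit 15     DNR - do not retry
 */
#include <stdint.h>
#include <stdio.h>

static void decode_nvme_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    if (sct == 0x0 && sc == 0x08) {
        /* Generic status, "Command Aborted due to SQ Deletion": what every
         * outstanding I/O reports when its qpair is torn down. */
        printf("ABORTED - SQ DELETION\n");
    }
}

int main(void)
{
    decode_nvme_status(0x08 << 1); /* SCT=0, SC=0x08, p=m=dnr=0, as in the log */
    return 0;
}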
00:17:29.067 [2024-04-24 19:48:10.414576] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:29.067 [2024-04-24 19:48:10.414624] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:17:29.067 [2024-04-24 19:48:10.417754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:17:29.067 [2024-04-24 19:48:10.417790] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:17:29.067 [2024-04-24 19:48:10.417828] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd14630 (9): Bad file descriptor
00:17:29.067 [2024-04-24 19:48:10.418048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.067 [2024-04-24 19:48:10.418219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.067 [2024-04-24 19:48:10.418246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b5880 with addr=10.0.0.2, port=4420
00:17:29.067 [2024-04-24 19:48:10.418264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b5880 is same with the state(5) to be set
00:17:29.067 [2024-04-24 19:48:10.418425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.067 [2024-04-24 19:48:10.418581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.067 [2024-04-24 19:48:10.418606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe911d0 with addr=10.0.0.2, port=4420
00:17:29.067 [2024-04-24 19:48:10.418623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe911d0 is same with the state(5) to be set
00:17:29.067 [2024-04-24 19:48:10.418658] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd34f60 (9): Bad file descriptor
00:17:29.067 [2024-04-24 19:48:10.418705] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd17690 (9): Bad file descriptor
00:17:29.067 [2024-04-24 19:48:10.418767] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0af20 (9): Bad file descriptor
00:17:29.067 [2024-04-24 19:48:10.419854] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcc3120 was disconnected and freed. reset controller.
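The connect() failures above carry errno = 111, which on Linux is ECONNREFUSED: while the controllers reset, nothing is accepting on 10.0.0.2:4420, so each reconnect attempt is refused until the target's listener comes back. A minimal sketch with plain POSIX sockets (not SPDK's posix sock layer) that would produce the same errno on a host where 10.0.0.2 is reachable but has no listener on port 4420:

/* Connecting to a reachable TCP address with no listener fails with
 * errno 111 (ECONNREFUSED) on Linux, exactly as posix_sock_create
 * reports above. Address and port are the ones from the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420), /* NVMe/TCP well-known port, as in the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the peer up but no listener, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}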
00:17:29.067 [2024-04-24 19:48:10.420195] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:17:29.067 [2024-04-24 19:48:10.420400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.067 [2024-04-24 19:48:10.420564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.067 [2024-04-24 19:48:10.420589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe92d80 with addr=10.0.0.2, port=4420
00:17:29.067 [2024-04-24 19:48:10.420606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe92d80 is same with the state(5) to be set
00:17:29.067 [2024-04-24 19:48:10.420646] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b5880 (9): Bad file descriptor
00:17:29.067 [2024-04-24 19:48:10.420683] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe911d0 (9): Bad file descriptor
00:17:29.067 [2024-04-24 19:48:10.421048-19:48:10.423030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
(64 READ commands, lba stepping by 128; each command paired with nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the identical notice pair repeats for every cid from 0 through 63)
00:17:29.069 [2024-04-24 19:48:10.424315-19:48:10.426226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
(the same 64-command READ abort sequence repeated a second time; each completion again ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.070 [2024-04-24 19:48:10.427534] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:17:29.070 [2024-04-24 19:48:10.427644] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:17:29.070 [2024-04-24 19:48:10.428018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.070 [2024-04-24 19:48:10.428042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.070 [2024-04-24 19:48:10.428065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.070 [2024-04-24 19:48:10.428081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.070 [2024-04-24 19:48:10.428097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 
19:48:10.428304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428610] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.428978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.428991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.429006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.429019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.429034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.429057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.429072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.429085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.429104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.429123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.429139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.429152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.429168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.429181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.429196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.429209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.429225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.429238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.437962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.438023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.438040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.071 [2024-04-24 19:48:10.438055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.071 [2024-04-24 19:48:10.438071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.072 [2024-04-24 19:48:10.438736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.072 [2024-04-24 19:48:10.438754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1a060 is same with the state(5) to be set 00:17:29.072 [2024-04-24 19:48:10.440550] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:17:29.072 [2024-04-24 19:48:10.440589] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:17:29.072 [2024-04-24 19:48:10.440607] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:17:29.072 [2024-04-24 19:48:10.440675] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:17:29.072 [2024-04-24 19:48:10.441087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:29.072 [2024-04-24 19:48:10.441264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:29.072 [2024-04-24 19:48:10.441291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd14630 with addr=10.0.0.2, port=4420 00:17:29.072 [2024-04-24 19:48:10.441309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd14630 is same with the state(5) to be set 00:17:29.072 [2024-04-24 19:48:10.441338] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe92d80 (9): Bad file descriptor 00:17:29.072 [2024-04-24 19:48:10.441359] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:29.072 [2024-04-24 19:48:10.441373] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:29.072 [2024-04-24 19:48:10.441390] 
nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:29.072 [2024-04-24 19:48:10.441419] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:17:29.072 [2024-04-24 19:48:10.441433] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:17:29.072 [2024-04-24 19:48:10.441446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:17:29.072 [2024-04-24 19:48:10.441494] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:29.072 [2024-04-24 19:48:10.441521] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:29.072 [2024-04-24 19:48:10.441564] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:29.072 [2024-04-24 19:48:10.441587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd14630 (9): Bad file descriptor 00:17:29.072 [2024-04-24 19:48:10.441810] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:29.072 [2024-04-24 19:48:10.441840] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:29.072 [2024-04-24 19:48:10.442004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:29.072 [2024-04-24 19:48:10.442322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:29.072 [2024-04-24 19:48:10.442348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xccb390 with addr=10.0.0.2, port=4420 00:17:29.072 [2024-04-24 19:48:10.442365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccb390 is same with the state(5) to be set 00:17:29.072 [2024-04-24 19:48:10.442523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:29.072 [2024-04-24 19:48:10.442673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:29.072 [2024-04-24 19:48:10.442699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcec060 with addr=10.0.0.2, port=4420 00:17:29.072 [2024-04-24 19:48:10.442715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcec060 is same with the state(5) to be set 00:17:29.072 [2024-04-24 19:48:10.442860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:29.072 [2024-04-24 19:48:10.443014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:29.072 [2024-04-24 19:48:10.443038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd34f60 with addr=10.0.0.2, port=4420 00:17:29.072 [2024-04-24 19:48:10.443054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd34f60 is same with the state(5) to be set 00:17:29.072 [2024-04-24 19:48:10.443204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:29.072 [2024-04-24 19:48:10.443355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:29.072 [2024-04-24 19:48:10.443380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf7ac0 with addr=10.0.0.2, port=4420 00:17:29.072 [2024-04-24 19:48:10.443396] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf7ac0 is same with the state(5) to be set 00:17:29.072 [2024-04-24 19:48:10.443413] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:29.072 [2024-04-24 19:48:10.443427] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:17:29.073 [2024-04-24 19:48:10.443440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:17:29.073 [2024-04-24 19:48:10.444095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:29.073 [2024-04-24 19:48:10.444693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.444959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.444974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 
19:48:10.444988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.445004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.445017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.445038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.445052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.445069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.445083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.445099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.073 [2024-04-24 19:48:10.445112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.073 [2024-04-24 19:48:10.445128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445287] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.445979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.445995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.446009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.446025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.446038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.446053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc07c0 is same with the state(5) to be set 00:17:29.074 [2024-04-24 19:48:10.447329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.447353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.447375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.447391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.447406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.447420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.447441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.074 [2024-04-24 19:48:10.447456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.074 [2024-04-24 19:48:10.447472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.074 [2024-04-24 19:48:10.447485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:5 through cid:62, with lba advancing in 128-block steps from 17024 to 24320 and timestamps 19:48:10.447501 through 19:48:10.449215 ...]
00:17:29.076 [2024-04-24 19:48:10.449231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.076 [2024-04-24 19:48:10.449244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.076 [2024-04-24 19:48:10.449258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc1c70 is same with the state(5) to be set
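The burst above is the host-side NVMe driver dumping every command that was still queued when the submission queue was torn down: each READ completes with ABORTED - SQ DELETION (00/08), i.e. NVMe status code type 0x0, status code 0x08 (Command Aborted due to SQ Deletion). To size up such a storm from a saved console log, stock text tools are enough; console.log below is a placeholder filename, not an artifact this job produces:

    grep -c 'ABORTED - SQ DELETION' console.log
    grep -o 'lba:[0-9]*' console.log | sort -t: -k2 -n | sed -n '1p;$p'

The first command counts aborted completions; the second prints the lowest and highest LBA mentioned, which brackets the affected range.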
00:17:29.076 [2024-04-24 19:48:10.451150] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:29.076 [2024-04-24 19:48:10.451182] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:17:29.076 task offset: 20992 on job bdev=Nvme1n1 fails
00:17:29.076
00:17:29.076                                                      Latency(us)
00:17:29.076 Device Information                    : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:29.076 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:29.076 Job: Nvme1n1 ended in about 0.88 seconds with error
00:17:29.076 	 Verification LBA range: start 0x0 length 0x400
00:17:29.076 	 Nvme1n1              :       0.88     145.70       9.11      72.85       0.00  289568.74   23981.32  273406.48
00:17:29.076 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:29.076 Job: Nvme2n1 ended in about 0.88 seconds with error
00:17:29.076 	 Verification LBA range: start 0x0 length 0x400
00:17:29.076 	 Nvme2n1              :       0.88     217.05      13.57      72.35       0.00  214017.33   20194.80  240784.12
00:17:29.076 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:29.076 Job: Nvme3n1 ended in about 0.89 seconds with error
00:17:29.076 	 Verification LBA range: start 0x0 length 0x400
00:17:29.076 	 Nvme3n1              :       0.89     143.35       8.96      71.68       0.00  282179.26   23690.05  268746.15
00:17:29.076 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:29.076 Job: Nvme4n1 ended in about 0.90 seconds with error
00:17:29.076 	 Verification LBA range: start 0x0 length 0x400
00:17:29.076 	 Nvme4n1              :       0.90     142.84       8.93      71.42       0.00  277055.72   22816.24  260978.92
00:17:29.076 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:29.076 Job: Nvme5n1 ended in about 0.88 seconds with error
00:17:29.076 	 Verification LBA range: start 0x0 length 0x400
00:17:29.076 	 Nvme5n1              :       0.88     145.14       9.07      72.57       0.00  266245.56   26796.94  296708.17
00:17:29.076 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:29.076 Job: Nvme6n1 ended in about 0.92 seconds with error
00:17:29.076 	 Verification LBA range: start 0x0 length 0x400
00:17:29.076 	 Nvme6n1              :       0.92     139.76       8.73      69.88       0.00  271428.27   21554.06  270299.59
00:17:29.076 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:29.076 Job: Nvme7n1 ended in about 0.92 seconds with error
00:17:29.076 	 Verification LBA range: start 0x0 length 0x400
00:17:29.076 	 Nvme7n1              :       0.92     139.27       8.70      69.64       0.00  266481.21   23884.23  265639.25
00:17:29.076 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:29.076 	 Verification LBA range: start 0x0 length 0x400
00:17:29.076 	 Nvme8n1              :       0.89     216.60      13.54       0.00       0.00  249479.02   22622.06  288940.94
00:17:29.076 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:29.076 Job: Nvme9n1 ended in about 0.88 seconds with error
00:17:29.076 	 Verification LBA range: start 0x0 length 0x400
00:17:29.076 	 Nvme9n1              :       0.88     144.92       9.06      72.46       0.00  242726.94   34758.35  323116.75
00:17:29.076 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:29.076 Job: Nvme10n1 ended in about 0.91 seconds with error
00:17:29.076 	 Verification LBA range: start 0x0 length 0x400
00:17:29.076 	 Nvme10n1             :       0.91     140.87       8.80      70.44       0.00  245281.37   21748.24  268746.15
00:17:29.076 ===================================================================================================================
00:17:29.076 	 Total                :                1575.50      98.47     643.28       0.00  258948.63   20194.80  323116.75
00:17:29.076 [2024-04-24 19:48:10.477782] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:29.076 [2024-04-24 19:48:10.477874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:17:29.076 [2024-04-24 19:48:10.477979] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccb390 (9): Bad file descriptor
00:17:29.076 [2024-04-24 19:48:10.478020] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcec060 (9): Bad file descriptor
00:17:29.076 [2024-04-24 19:48:10.478042] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd34f60 (9): Bad file descriptor
00:17:29.076 [2024-04-24 19:48:10.478060] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf7ac0 (9): Bad file descriptor
00:17:29.076 [2024-04-24 19:48:10.478078] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:17:29.076 [2024-04-24 19:48:10.478106] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:17:29.076 [2024-04-24 19:48:10.478125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:17:29.076 [2024-04-24 19:48:10.478224] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:29.076 [2024-04-24 19:48:10.478251] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:29.076 [2024-04-24 19:48:10.478270] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:29.076 [2024-04-24 19:48:10.478291] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:29.076 [2024-04-24 19:48:10.478312] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:29.076 [2024-04-24 19:48:10.478458] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
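In the bdevperf summary above, the columns after the device name are runtime in seconds, IOPS, throughput in MiB/s, failed I/O per second, timed-out I/O per second, and average/min/max latency in microseconds; the roughly 70 Fail/s per device reflects the queue depth of 64 being aborted on each reset. Assuming the Jenkins timestamp stays the first whitespace-separated field, a sketch for pulling device name, IOPS and Fail/s out of a saved table (console.log again a placeholder):

    grep -E 'Nvme[0-9]+n1[[:space:]]+:' console.log | awk '{print $2, $5, $7}'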
00:17:29.076 [2024-04-24 19:48:10.478832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.076 [2024-04-24 19:48:10.478999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.076 [2024-04-24 19:48:10.479025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd17690 with addr=10.0.0.2, port=4420
00:17:29.076 [2024-04-24 19:48:10.479044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17690 is same with the state(5) to be set
00:17:29.076 [2024-04-24 19:48:10.479202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.076 [2024-04-24 19:48:10.479364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.076 [2024-04-24 19:48:10.479391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0af20 with addr=10.0.0.2, port=4420
00:17:29.076 [2024-04-24 19:48:10.479407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0af20 is same with the state(5) to be set
00:17:29.076 [2024-04-24 19:48:10.479423] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:17:29.076 [2024-04-24 19:48:10.479436] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:17:29.076 [2024-04-24 19:48:10.479450] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:17:29.076 [2024-04-24 19:48:10.479468] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:17:29.076 [2024-04-24 19:48:10.479482] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:17:29.076 [2024-04-24 19:48:10.479496] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:17:29.076 [2024-04-24 19:48:10.479515] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:17:29.076 [2024-04-24 19:48:10.479530] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:17:29.076 [2024-04-24 19:48:10.479542] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:17:29.076 [2024-04-24 19:48:10.479558] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:17:29.076 [2024-04-24 19:48:10.479573] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:17:29.076 [2024-04-24 19:48:10.479586] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:17:29.077 [2024-04-24 19:48:10.479622] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:29.077 [2024-04-24 19:48:10.479653] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:29.077 [2024-04-24 19:48:10.479680] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:29.077 [2024-04-24 19:48:10.479707] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:29.077 [2024-04-24 19:48:10.479728] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:29.077 [2024-04-24 19:48:10.479746] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:29.077 [2024-04-24 19:48:10.479765] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:29.077 [2024-04-24 19:48:10.480377] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:17:29.077 [2024-04-24 19:48:10.480404] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:29.077 [2024-04-24 19:48:10.480422] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:17:29.077 [2024-04-24 19:48:10.480468] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:29.077 [2024-04-24 19:48:10.480484] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:29.077 [2024-04-24 19:48:10.480497] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:29.077 [2024-04-24 19:48:10.480509] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:29.077 [2024-04-24 19:48:10.480546] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd17690 (9): Bad file descriptor
00:17:29.077 [2024-04-24 19:48:10.480569] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0af20 (9): Bad file descriptor
00:17:29.077 [2024-04-24 19:48:10.480833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.077 [2024-04-24 19:48:10.481008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.077 [2024-04-24 19:48:10.481034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe911d0 with addr=10.0.0.2, port=4420
00:17:29.077 [2024-04-24 19:48:10.481051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe911d0 is same with the state(5) to be set
00:17:29.077 [2024-04-24 19:48:10.481203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.077 [2024-04-24 19:48:10.481355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.077 [2024-04-24 19:48:10.481380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8b5880 with addr=10.0.0.2, port=4420
00:17:29.077 [2024-04-24 19:48:10.481396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b5880 is same with the state(5) to be set
00:17:29.077 [2024-04-24 19:48:10.481532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.077 [2024-04-24 19:48:10.481721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.077 [2024-04-24 19:48:10.481747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe92d80 with addr=10.0.0.2, port=4420
00:17:29.077 [2024-04-24 19:48:10.481764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe92d80 is same with the state(5) to be set
00:17:29.077 [2024-04-24 19:48:10.481779] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:17:29.077 [2024-04-24 19:48:10.481792] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:17:29.077 [2024-04-24 19:48:10.481806] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:17:29.077 [2024-04-24 19:48:10.481824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:17:29.077 [2024-04-24 19:48:10.481838] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:17:29.077 [2024-04-24 19:48:10.481857] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:17:29.077 [2024-04-24 19:48:10.481902] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:17:29.077 [2024-04-24 19:48:10.481935] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:29.077 [2024-04-24 19:48:10.481953] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:29.077 [2024-04-24 19:48:10.481979] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe911d0 (9): Bad file descriptor
00:17:29.077 [2024-04-24 19:48:10.482001] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b5880 (9): Bad file descriptor
00:17:29.077 [2024-04-24 19:48:10.482020] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe92d80 (9): Bad file descriptor
00:17:29.077 [2024-04-24 19:48:10.482194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.077 [2024-04-24 19:48:10.482359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:29.077 [2024-04-24 19:48:10.482385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd14630 with addr=10.0.0.2, port=4420
00:17:29.077 [2024-04-24 19:48:10.482400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd14630 is same with the state(5) to be set
00:17:29.077 [2024-04-24 19:48:10.482415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:17:29.077 [2024-04-24 19:48:10.482429] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:17:29.077 [2024-04-24 19:48:10.482442] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:17:29.077 [2024-04-24 19:48:10.482460] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:17:29.077 [2024-04-24 19:48:10.482474] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:17:29.077 [2024-04-24 19:48:10.482487] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:29.077 [2024-04-24 19:48:10.482502] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:17:29.077 [2024-04-24 19:48:10.482516] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:17:29.077 [2024-04-24 19:48:10.482529] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
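Each failed reconnect attempt leaves the same three-line signature per subsystem (Ctrlr is in error state, controller reinitialization failed, in failed state). To see which subsystems churned the most in a saved log, a sketch along these lines works (console.log is again a placeholder):

    grep 'controller reinitialization failed' console.log | grep -o 'cnode[0-9]*' | sort | uniq -c | sort -rn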
00:17:29.077 [2024-04-24 19:48:10.482570] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:29.077 [2024-04-24 19:48:10.482589] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:29.077 [2024-04-24 19:48:10.482602] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:29.077 [2024-04-24 19:48:10.482618] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd14630 (9): Bad file descriptor
00:17:29.077 [2024-04-24 19:48:10.482683] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:17:29.077 [2024-04-24 19:48:10.482704] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:17:29.077 [2024-04-24 19:48:10.482719] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:17:29.077 [2024-04-24 19:48:10.482754] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:29.643 19:48:10 -- target/shutdown.sh@136 -- # nvmfpid=
00:17:29.643 19:48:10 -- target/shutdown.sh@139 -- # sleep 1
00:17:30.577 19:48:11 -- target/shutdown.sh@142 -- # kill -9 1724952
00:17:30.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1724952) - No such process
00:17:30.577 19:48:11 -- target/shutdown.sh@142 -- # true
00:17:30.577 19:48:11 -- target/shutdown.sh@144 -- # stoptarget
00:17:30.577 19:48:11 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:17:30.577 19:48:11 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:30.577 19:48:11 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:30.577 19:48:11 -- target/shutdown.sh@45 -- # nvmftestfini
00:17:30.577 19:48:11 -- nvmf/common.sh@477 -- # nvmfcleanup
00:17:30.577 19:48:11 -- nvmf/common.sh@117 -- # sync
00:17:30.577 19:48:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:30.577 19:48:11 -- nvmf/common.sh@120 -- # set +e
00:17:30.577 19:48:11 -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:30.577 19:48:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
19:48:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
19:48:12 -- nvmf/common.sh@124 -- # set -e
19:48:12 -- nvmf/common.sh@125 -- # return 0
19:48:12 -- nvmf/common.sh@478 -- # '[' -n '' ']'
19:48:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
19:48:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
19:48:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
19:48:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
19:48:12 -- nvmf/common.sh@278 -- # remove_spdk_ns
19:48:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
19:48:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
19:48:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:33.134 19:48:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:17:33.134
00:17:33.134 real 0m8.379s
00:17:33.134 user 0m21.974s
00:17:33.134 sys 0m1.525s
00:17:33.134 19:48:14 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:17:33.134 19:48:14 -- common/autotest_common.sh@10 -- # set +x
00:17:33.134 ************************************
00:17:33.134 END TEST nvmf_shutdown_tc3
00:17:33.134 ************************************
00:17:33.134 19:48:14 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:17:33.134
00:17:33.134 real 0m28.152s
00:17:33.134 user 1m18.435s
00:17:33.134 sys 0m6.584s
00:17:33.134 19:48:14 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:17:33.134 19:48:14 -- common/autotest_common.sh@10 -- # set +x
00:17:33.134 ************************************
00:17:33.134 END TEST nvmf_shutdown
00:17:33.134 ************************************
00:17:33.134 19:48:14 -- nvmf/nvmf.sh@84 -- # timing_exit target
00:17:33.134 19:48:14 -- common/autotest_common.sh@716 -- # xtrace_disable
00:17:33.134 19:48:14 -- common/autotest_common.sh@10 -- # set +x
00:17:33.134 19:48:14 -- nvmf/nvmf.sh@86 -- # timing_enter host
00:17:33.134 19:48:14 -- common/autotest_common.sh@710 -- # xtrace_disable
00:17:33.134 19:48:14 -- common/autotest_common.sh@10 -- # set +x
00:17:33.134 19:48:14 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]]
00:17:33.134 19:48:14 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:17:33.134 19:48:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:17:33.134 19:48:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:33.134 19:48:14 -- common/autotest_common.sh@10 -- # set +x
00:17:33.134 ************************************
00:17:33.134 START TEST nvmf_multicontroller
00:17:33.134 ************************************
00:17:33.134 19:48:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:17:33.134 * Looking for test storage...
00:17:33.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:17:33.134 19:48:14 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:33.134 19:48:14 -- nvmf/common.sh@7 -- # uname -s
00:17:33.134 19:48:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:33.134 19:48:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:33.134 19:48:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:33.134 19:48:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:33.134 19:48:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:33.134 19:48:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:33.134 19:48:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:33.134 19:48:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:33.134 19:48:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:33.134 19:48:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:33.134 19:48:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:33.134 19:48:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:17:33.134 19:48:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:33.134 19:48:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:33.134 19:48:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:33.134 19:48:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:33.134 19:48:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:33.134 19:48:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:33.134 19:48:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:33.134 19:48:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:33.134 19:48:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:33.134 19:48:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:33.134 19:48:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:33.134 19:48:14 -- paths/export.sh@5 -- # export PATH
00:17:33.135 19:48:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:33.135 19:48:14 -- nvmf/common.sh@47 -- # : 0
00:17:33.135 19:48:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:17:33.135 19:48:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:17:33.135 19:48:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:33.135 19:48:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:33.135 19:48:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:33.135 19:48:14 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:17:33.135 19:48:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:17:33.135 19:48:14 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:17:33.135 19:48:14 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:17:33.135 19:48:14 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:17:33.135 19:48:14 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:17:33.135 19:48:14 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:17:33.135 19:48:14 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:17:33.135 19:48:14 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:17:33.135 19:48:14 -- host/multicontroller.sh@23 -- # nvmftestinit
00:17:33.135 19:48:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:17:33.135 19:48:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:33.135 19:48:14 -- nvmf/common.sh@437 -- # prepare_net_devs
00:17:33.135 19:48:14 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:17:33.135 19:48:14 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:17:33.135 19:48:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:33.135 19:48:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:33.135 19:48:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:33.135 19:48:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:17:33.135 19:48:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:17:33.135 19:48:14 -- nvmf/common.sh@285 -- # xtrace_disable
00:17:33.135 19:48:14 -- common/autotest_common.sh@10 -- # set +x
00:17:35.038 19:48:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:17:35.038 19:48:16 -- nvmf/common.sh@291 -- # pci_devs=()
00:17:35.038 19:48:16 -- nvmf/common.sh@291 -- # local -a pci_devs
00:17:35.038 19:48:16 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:17:35.038 19:48:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:17:35.038 19:48:16 -- nvmf/common.sh@293 -- # pci_drivers=()
00:17:35.038 19:48:16 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:17:35.038 19:48:16 -- nvmf/common.sh@295 -- # net_devs=()
00:17:35.038 19:48:16 -- nvmf/common.sh@295 -- # local -ga net_devs
00:17:35.038 19:48:16 -- nvmf/common.sh@296 -- # e810=()
00:17:35.038 19:48:16 -- nvmf/common.sh@296 -- # local -ga e810
00:17:35.038 19:48:16 -- nvmf/common.sh@297 -- # x722=()
00:17:35.038 19:48:16 -- nvmf/common.sh@297 -- # local -ga x722
00:17:35.038 19:48:16 -- nvmf/common.sh@298 -- # mlx=()
00:17:35.038 19:48:16 -- nvmf/common.sh@298 -- # local -ga mlx
00:17:35.038 19:48:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:35.038 19:48:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:35.038 19:48:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:35.038 19:48:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:35.038 19:48:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:35.038 19:48:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:35.038 19:48:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:35.038 19:48:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:35.038 19:48:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:35.038 19:48:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:35.038 19:48:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:35.038 19:48:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:17:35.038 19:48:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:17:35.038 19:48:16 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:17:35.038 19:48:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:17:35.038 19:48:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:17:35.038 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:17:35.038 19:48:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:17:35.038 19:48:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:17:35.038 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:17:35.038 19:48:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:17:35.038 19:48:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:17:35.038 19:48:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:35.038 19:48:16 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:17:35.038 19:48:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:35.038 19:48:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:17:35.038 Found net devices under 0000:0a:00.0: cvl_0_0
00:17:35.038 19:48:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:17:35.038 19:48:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:17:35.038 19:48:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:35.038 19:48:16 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:17:35.038 19:48:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:35.038 19:48:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:17:35.038 Found net devices under 0000:0a:00.1: cvl_0_1
00:17:35.038 19:48:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:17:35.038 19:48:16 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:17:35.038 19:48:16 -- nvmf/common.sh@403 -- # is_hw=yes
00:17:35.038 19:48:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:17:35.038 19:48:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:17:35.038 19:48:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:35.038 19:48:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:35.038 19:48:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:35.038 19:48:16 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:17:35.038 19:48:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:35.038 19:48:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:35.038 19:48:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:17:35.038 19:48:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:35.038 19:48:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:35.038 19:48:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:17:35.038 19:48:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:17:35.038 19:48:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:17:35.038 19:48:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:35.038 19:48:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:35.038 19:48:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:35.038 19:48:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:17:35.038 19:48:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:35.038 19:48:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:35.038 19:48:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
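Collected in one place, the network plumbing that nvmf_tcp_init traced above amounts to the following; the cvl_0_0/cvl_0_1 interface names are specific to this CI host's E810 NIC, so treat this as a sketch rather than a general recipe:

    ip netns add cvl_0_0_ns_spdk                                        # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in

The two pings that follow are the smoke test that both directions of this link work.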
00:17:35.038 19:48:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:17:35.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:35.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms
00:17:35.038
00:17:35.038 --- 10.0.0.2 ping statistics ---
00:17:35.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:35.038 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms
00:17:35.038 19:48:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:35.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:35.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms
00:17:35.039
00:17:35.039 --- 10.0.0.1 ping statistics ---
00:17:35.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:35.039 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:17:35.039 19:48:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:35.039 19:48:16 -- nvmf/common.sh@411 -- # return 0
00:17:35.039 19:48:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:17:35.039 19:48:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:35.039 19:48:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:17:35.039 19:48:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:17:35.039 19:48:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:35.039 19:48:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:17:35.039 19:48:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:17:35.039 19:48:16 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:17:35.039 19:48:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:17:35.039 19:48:16 -- common/autotest_common.sh@710 -- # xtrace_disable
00:17:35.039 19:48:16 -- common/autotest_common.sh@10 -- # set +x
00:17:35.039 19:48:16 -- nvmf/common.sh@470 -- # nvmfpid=1727787
00:17:35.039 19:48:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:17:35.039 19:48:16 -- nvmf/common.sh@471 -- # waitforlisten 1727787
00:17:35.039 19:48:16 -- common/autotest_common.sh@817 -- # '[' -z 1727787 ']'
00:17:35.039 19:48:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:35.039 19:48:16 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:35.039 19:48:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:35.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:35.039 19:48:16 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:35.039 19:48:16 -- common/autotest_common.sh@10 -- # set +x
00:17:35.298 [2024-04-24 19:48:16.568788] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization...
00:17:35.298 [2024-04-24 19:48:16.568871] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:35.298 EAL: No free 2048 kB hugepages reported on node 1
00:17:35.298 [2024-04-24 19:48:16.632409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:17:35.298 [2024-04-24 19:48:16.739385] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:35.298 [2024-04-24 19:48:16.739465] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:35.298 [2024-04-24 19:48:16.739479] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:35.298 [2024-04-24 19:48:16.739490] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:35.298 [2024-04-24 19:48:16.739499] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:35.298 [2024-04-24 19:48:16.739590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:35.298 [2024-04-24 19:48:16.739664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:17:35.298 [2024-04-24 19:48:16.739669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:35.556 19:48:16 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:17:35.556 19:48:16 -- common/autotest_common.sh@850 -- # return 0
00:17:35.556 19:48:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:17:35.556 19:48:16 -- common/autotest_common.sh@716 -- # xtrace_disable
00:17:35.556 19:48:16 -- common/autotest_common.sh@10 -- # set +x
00:17:35.556 19:48:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:35.556 19:48:16 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:35.556 19:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:35.556 19:48:16 -- common/autotest_common.sh@10 -- # set +x
00:17:35.556 [2024-04-24 19:48:16.891813] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:35.556 19:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:35.556 19:48:16 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:17:35.556 19:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:35.556 19:48:16 -- common/autotest_common.sh@10 -- # set +x
00:17:35.556 Malloc0
00:17:35.556 19:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:35.556 19:48:16 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:17:35.556 19:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:35.556 19:48:16 -- common/autotest_common.sh@10 -- # set +x
00:17:35.556 19:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:35.556 19:48:16 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:17:35.556 19:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:35.556 19:48:16 -- common/autotest_common.sh@10 -- # set +x
00:17:35.556 19:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:35.556 19:48:16 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:35.556 19:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:35.556 19:48:16 -- common/autotest_common.sh@10 -- # set +x
00:17:35.556 [2024-04-24 19:48:16.958828] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:35.556 19:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:35.556 19:48:16 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:17:35.556 19:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:35.556 19:48:16 -- common/autotest_common.sh@10 -- # set +x
00:17:35.556 [2024-04-24 19:48:16.966731] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:17:35.556 19:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:35.556 19:48:16 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:17:35.556 19:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:35.556 19:48:16 -- common/autotest_common.sh@10 -- # set +x
00:17:35.557 Malloc1
00:17:35.557 19:48:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:35.557 19:48:16 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:17:35.557 19:48:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:35.557 19:48:17 -- common/autotest_common.sh@10 -- # set +x
00:17:35.557 19:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:35.557 19:48:17 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:17:35.557 19:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:35.557 19:48:17 -- common/autotest_common.sh@10 -- # set +x
00:17:35.557 19:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:35.557 19:48:17 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:17:35.557 19:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:35.557 19:48:17 -- common/autotest_common.sh@10 -- # set +x
00:17:35.557 19:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:35.557 19:48:17 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
00:17:35.557 19:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:35.557 19:48:17 -- common/autotest_common.sh@10 -- # set +x
00:17:35.557 19:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:35.557 19:48:17 -- host/multicontroller.sh@44 -- # bdevperf_pid=1727844
00:17:35.557 19:48:17 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
00:17:35.557 19:48:17 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:17:35.557 19:48:17 -- host/multicontroller.sh@47 -- # waitforlisten 1727844 /var/tmp/bdevperf.sock
00:17:35.557 19:48:17 -- common/autotest_common.sh@817 -- # '[' -z 1727844 ']'
00:17:35.557 19:48:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:35.557 19:48:17 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:35.557 19:48:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:35.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
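rpc_cmd in these traces forwards to SPDK's scripts/rpc.py against the socket given with -s, so the controller attach that follows could be reproduced by hand with something like this (a sketch assuming the same tree layout and a bdevperf instance already listening on /var/tmp/bdevperf.sock):

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000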
00:17:35.557 19:48:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:35.557 19:48:17 -- common/autotest_common.sh@10 -- # set +x 00:17:36.123 19:48:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:36.123 19:48:17 -- common/autotest_common.sh@850 -- # return 0 00:17:36.123 19:48:17 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:36.123 19:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.123 19:48:17 -- common/autotest_common.sh@10 -- # set +x 00:17:36.123 NVMe0n1 00:17:36.123 19:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.123 19:48:17 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:36.123 19:48:17 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:36.123 19:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.123 19:48:17 -- common/autotest_common.sh@10 -- # set +x 00:17:36.123 19:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.123 1 00:17:36.123 19:48:17 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:36.123 19:48:17 -- common/autotest_common.sh@638 -- # local es=0 00:17:36.123 19:48:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:36.123 19:48:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:36.123 19:48:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:36.123 19:48:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:36.123 19:48:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:36.123 19:48:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:36.123 19:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.123 19:48:17 -- common/autotest_common.sh@10 -- # set +x 00:17:36.123 request: 00:17:36.123 { 00:17:36.123 "name": "NVMe0", 00:17:36.123 "trtype": "tcp", 00:17:36.123 "traddr": "10.0.0.2", 00:17:36.123 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:36.123 "hostaddr": "10.0.0.2", 00:17:36.123 "hostsvcid": "60000", 00:17:36.123 "adrfam": "ipv4", 00:17:36.123 "trsvcid": "4420", 00:17:36.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.123 "method": "bdev_nvme_attach_controller", 00:17:36.123 "req_id": 1 00:17:36.123 } 00:17:36.123 Got JSON-RPC error response 00:17:36.123 response: 00:17:36.123 { 00:17:36.123 "code": -114, 00:17:36.123 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:36.123 } 00:17:36.123 19:48:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:36.123 19:48:17 -- common/autotest_common.sh@641 -- # es=1 00:17:36.123 19:48:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:36.123 19:48:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:36.123 19:48:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:36.123 19:48:17 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:36.123 19:48:17 -- common/autotest_common.sh@638 -- # local es=0 00:17:36.123 19:48:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:36.123 19:48:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:36.123 19:48:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:36.123 19:48:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:36.123 19:48:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:36.123 19:48:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:36.124 19:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.124 19:48:17 -- common/autotest_common.sh@10 -- # set +x 00:17:36.382 request: 00:17:36.382 { 00:17:36.382 "name": "NVMe0", 00:17:36.382 "trtype": "tcp", 00:17:36.382 "traddr": "10.0.0.2", 00:17:36.382 "hostaddr": "10.0.0.2", 00:17:36.382 "hostsvcid": "60000", 00:17:36.382 "adrfam": "ipv4", 00:17:36.382 "trsvcid": "4420", 00:17:36.382 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:36.382 "method": "bdev_nvme_attach_controller", 00:17:36.382 "req_id": 1 00:17:36.382 } 00:17:36.382 Got JSON-RPC error response 00:17:36.382 response: 00:17:36.382 { 00:17:36.382 "code": -114, 00:17:36.382 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:36.382 } 00:17:36.382 19:48:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:36.382 19:48:17 -- common/autotest_common.sh@641 -- # es=1 00:17:36.382 19:48:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:36.382 19:48:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:36.382 19:48:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:36.382 19:48:17 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:36.382 19:48:17 -- common/autotest_common.sh@638 -- # local es=0 00:17:36.382 19:48:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:36.382 19:48:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:36.382 19:48:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:36.382 19:48:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:36.382 19:48:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:36.382 19:48:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:36.382 19:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.382 19:48:17 -- common/autotest_common.sh@10 -- # set +x 00:17:36.382 request: 00:17:36.382 { 00:17:36.382 "name": "NVMe0", 00:17:36.382 "trtype": "tcp", 00:17:36.382 "traddr": "10.0.0.2", 00:17:36.382 "hostaddr": 
"10.0.0.2", 00:17:36.382 "hostsvcid": "60000", 00:17:36.382 "adrfam": "ipv4", 00:17:36.382 "trsvcid": "4420", 00:17:36.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.382 "multipath": "disable", 00:17:36.382 "method": "bdev_nvme_attach_controller", 00:17:36.382 "req_id": 1 00:17:36.382 } 00:17:36.382 Got JSON-RPC error response 00:17:36.382 response: 00:17:36.382 { 00:17:36.382 "code": -114, 00:17:36.382 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:17:36.382 } 00:17:36.382 19:48:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:36.382 19:48:17 -- common/autotest_common.sh@641 -- # es=1 00:17:36.382 19:48:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:36.382 19:48:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:36.382 19:48:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:36.382 19:48:17 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:36.382 19:48:17 -- common/autotest_common.sh@638 -- # local es=0 00:17:36.382 19:48:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:36.382 19:48:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:36.382 19:48:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:36.382 19:48:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:36.382 19:48:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:36.382 19:48:17 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:36.382 19:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.382 19:48:17 -- common/autotest_common.sh@10 -- # set +x 00:17:36.382 request: 00:17:36.382 { 00:17:36.382 "name": "NVMe0", 00:17:36.382 "trtype": "tcp", 00:17:36.382 "traddr": "10.0.0.2", 00:17:36.382 "hostaddr": "10.0.0.2", 00:17:36.382 "hostsvcid": "60000", 00:17:36.382 "adrfam": "ipv4", 00:17:36.382 "trsvcid": "4420", 00:17:36.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.382 "multipath": "failover", 00:17:36.382 "method": "bdev_nvme_attach_controller", 00:17:36.382 "req_id": 1 00:17:36.382 } 00:17:36.382 Got JSON-RPC error response 00:17:36.382 response: 00:17:36.382 { 00:17:36.382 "code": -114, 00:17:36.382 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:36.382 } 00:17:36.382 19:48:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:36.382 19:48:17 -- common/autotest_common.sh@641 -- # es=1 00:17:36.382 19:48:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:36.382 19:48:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:36.382 19:48:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:36.382 19:48:17 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:36.382 19:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.382 19:48:17 -- common/autotest_common.sh@10 -- # set +x 00:17:36.382 00:17:36.382 19:48:17 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:17:36.382 19:48:17 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:36.382 19:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.382 19:48:17 -- common/autotest_common.sh@10 -- # set +x 00:17:36.382 19:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.383 19:48:17 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:36.383 19:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.383 19:48:17 -- common/autotest_common.sh@10 -- # set +x 00:17:36.641 00:17:36.641 19:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.641 19:48:17 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:36.641 19:48:17 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:36.641 19:48:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.641 19:48:17 -- common/autotest_common.sh@10 -- # set +x 00:17:36.641 19:48:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.641 19:48:17 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:36.641 19:48:17 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:37.575 0 00:17:37.575 19:48:19 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:37.575 19:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.575 19:48:19 -- common/autotest_common.sh@10 -- # set +x 00:17:37.833 19:48:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.833 19:48:19 -- host/multicontroller.sh@100 -- # killprocess 1727844 00:17:37.833 19:48:19 -- common/autotest_common.sh@936 -- # '[' -z 1727844 ']' 00:17:37.833 19:48:19 -- common/autotest_common.sh@940 -- # kill -0 1727844 00:17:37.833 19:48:19 -- common/autotest_common.sh@941 -- # uname 00:17:37.833 19:48:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:37.833 19:48:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1727844 00:17:37.833 19:48:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:37.833 19:48:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:37.833 19:48:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1727844' 00:17:37.833 killing process with pid 1727844 00:17:37.833 19:48:19 -- common/autotest_common.sh@955 -- # kill 1727844 00:17:37.833 19:48:19 -- common/autotest_common.sh@960 -- # wait 1727844 00:17:38.091 19:48:19 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:38.092 19:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.092 19:48:19 -- common/autotest_common.sh@10 -- # set +x 00:17:38.092 19:48:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.092 19:48:19 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:38.092 19:48:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.092 19:48:19 -- common/autotest_common.sh@10 -- # set +x 00:17:38.092 19:48:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.092 19:48:19 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
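The four NOT cases above pin down bdev_nvme_attach_controller's duplicate-name rules: once NVMe0 exists, re-attaching that name with a different hostnqn, against a different subsystem (cnode2), or with -x disable / -x failover on the same listener all return -114, while adding the subsystem's second listener (port 4421) as an extra path under the same name succeeds. A hedged sketch of the happy path against the bdevperf socket, reusing the arguments from the trace:

    rpc=./scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # First path claims the controller name NVMe0 (bdev NVMe0n1 appears).
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # The same subsystem's second listener attaches as another path.
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # A single path can be dropped again by naming its listener.
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1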
00:17:38.092 19:48:19 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:38.092 19:48:19 -- common/autotest_common.sh@1598 -- # read -r file 00:17:38.092 19:48:19 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:17:38.092 19:48:19 -- common/autotest_common.sh@1597 -- # sort -u 00:17:38.092 19:48:19 -- common/autotest_common.sh@1599 -- # cat 00:17:38.092 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:17:38.092 [2024-04-24 19:48:17.075778] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:17:38.092 [2024-04-24 19:48:17.075876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1727844 ] 00:17:38.092 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.092 [2024-04-24 19:48:17.140086] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.092 [2024-04-24 19:48:17.248469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.092 [2024-04-24 19:48:17.926793] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 58b00ded-eb55-4858-b520-f37ad63a2a95 already exists 00:17:38.092 [2024-04-24 19:48:17.926841] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:58b00ded-eb55-4858-b520-f37ad63a2a95 alias for bdev NVMe1n1 00:17:38.092 [2024-04-24 19:48:17.926862] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:38.092 Running I/O for 1 seconds... 00:17:38.092 00:17:38.092 Latency(us) 00:17:38.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.092 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:17:38.092 NVMe0n1 : 1.01 18834.98 73.57 0.00 0.00 6777.80 4223.43 14466.47 00:17:38.092 =================================================================================================================== 00:17:38.092 Total : 18834.98 73.57 0.00 0.00 6777.80 4223.43 14466.47 00:17:38.092 Received shutdown signal, test time was about 1.000000 seconds 00:17:38.092 00:17:38.092 Latency(us) 00:17:38.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.092 =================================================================================================================== 00:17:38.092 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:38.092 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:17:38.092 19:48:19 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:38.092 19:48:19 -- common/autotest_common.sh@1598 -- # read -r file 00:17:38.092 19:48:19 -- host/multicontroller.sh@108 -- # nvmftestfini 00:17:38.092 19:48:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:38.092 19:48:19 -- nvmf/common.sh@117 -- # sync 00:17:38.092 19:48:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:38.092 19:48:19 -- nvmf/common.sh@120 -- # set +e 00:17:38.092 19:48:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:38.092 19:48:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:38.092 rmmod nvme_tcp 00:17:38.092 rmmod nvme_fabrics 00:17:38.092 rmmod nvme_keyring 00:17:38.092 19:48:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:38.092 19:48:19 -- nvmf/common.sh@124 -- # set -e 
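The try.txt dump preserves the bdevperf side of the run: started with -z, it idles until configured over /var/tmp/bdevperf.sock, logs the bdev-name/UUID collision when NVMe1 attaches to the namespace already exposed through NVMe0, then sustains ~18.8K IOPS of 4 KiB writes at queue depth 128 for the 1-second test. A hedged sketch of that driver pattern (paths assume an SPDK build tree laid out as in this job):

    # Start bdevperf idle (-z) on a private RPC socket, same flags as the test.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    perf_pid=$!
    # ...attach bdevs over the socket with rpc.py -s /var/tmp/bdevperf.sock, then:
    # kick off the configured workload and collect the results shown above.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    kill $perf_pid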
00:17:38.092 19:48:19 -- nvmf/common.sh@125 -- # return 0 00:17:38.092 19:48:19 -- nvmf/common.sh@478 -- # '[' -n 1727787 ']' 00:17:38.092 19:48:19 -- nvmf/common.sh@479 -- # killprocess 1727787 00:17:38.092 19:48:19 -- common/autotest_common.sh@936 -- # '[' -z 1727787 ']' 00:17:38.092 19:48:19 -- common/autotest_common.sh@940 -- # kill -0 1727787 00:17:38.092 19:48:19 -- common/autotest_common.sh@941 -- # uname 00:17:38.092 19:48:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.092 19:48:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1727787 00:17:38.092 19:48:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:38.092 19:48:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:38.092 19:48:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1727787' 00:17:38.092 killing process with pid 1727787 00:17:38.092 19:48:19 -- common/autotest_common.sh@955 -- # kill 1727787 00:17:38.092 19:48:19 -- common/autotest_common.sh@960 -- # wait 1727787 00:17:38.353 19:48:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:38.353 19:48:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:38.353 19:48:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:38.353 19:48:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.353 19:48:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:38.353 19:48:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.353 19:48:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.353 19:48:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.893 19:48:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:40.893 00:17:40.893 real 0m7.577s 00:17:40.893 user 0m12.105s 00:17:40.893 sys 0m2.340s 00:17:40.893 19:48:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:40.893 19:48:21 -- common/autotest_common.sh@10 -- # set +x 00:17:40.893 ************************************ 00:17:40.893 END TEST nvmf_multicontroller 00:17:40.893 ************************************ 00:17:40.893 19:48:21 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:40.893 19:48:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:40.893 19:48:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:40.893 19:48:21 -- common/autotest_common.sh@10 -- # set +x 00:17:40.893 ************************************ 00:17:40.893 START TEST nvmf_aer 00:17:40.893 ************************************ 00:17:40.893 19:48:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:40.893 * Looking for test storage... 
00:17:40.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:40.893 19:48:22 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.893 19:48:22 -- nvmf/common.sh@7 -- # uname -s 00:17:40.893 19:48:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.893 19:48:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.893 19:48:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.893 19:48:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.893 19:48:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.893 19:48:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.893 19:48:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.893 19:48:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.893 19:48:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.893 19:48:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.893 19:48:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:40.893 19:48:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:40.893 19:48:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.893 19:48:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.893 19:48:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.893 19:48:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.893 19:48:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.893 19:48:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.893 19:48:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.893 19:48:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.893 19:48:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.893 19:48:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.893 19:48:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.893 19:48:22 -- paths/export.sh@5 -- # export PATH 00:17:40.893 19:48:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.893 19:48:22 -- nvmf/common.sh@47 -- # : 0 00:17:40.893 19:48:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.893 19:48:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.893 19:48:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.893 19:48:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.893 19:48:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.893 19:48:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:40.893 19:48:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.893 19:48:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:40.893 19:48:22 -- host/aer.sh@11 -- # nvmftestinit 00:17:40.893 19:48:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:40.893 19:48:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.893 19:48:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:40.893 19:48:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:40.893 19:48:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:40.893 19:48:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.893 19:48:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.893 19:48:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.893 19:48:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:40.893 19:48:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:40.893 19:48:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:40.893 19:48:22 -- common/autotest_common.sh@10 -- # set +x 00:17:42.795 19:48:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:42.795 19:48:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:42.795 19:48:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:42.795 19:48:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:42.795 19:48:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:42.795 19:48:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:42.795 19:48:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:42.795 19:48:23 -- nvmf/common.sh@295 -- # net_devs=() 00:17:42.795 19:48:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:42.795 19:48:23 -- nvmf/common.sh@296 -- # e810=() 00:17:42.795 19:48:23 -- nvmf/common.sh@296 -- # local -ga e810 00:17:42.795 19:48:23 -- nvmf/common.sh@297 -- # x722=() 00:17:42.795 
19:48:23 -- nvmf/common.sh@297 -- # local -ga x722 00:17:42.795 19:48:23 -- nvmf/common.sh@298 -- # mlx=() 00:17:42.795 19:48:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:42.795 19:48:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.795 19:48:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.795 19:48:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.795 19:48:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.795 19:48:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.795 19:48:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.795 19:48:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.795 19:48:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.795 19:48:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.795 19:48:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.795 19:48:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.795 19:48:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:42.795 19:48:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:42.795 19:48:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:42.795 19:48:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.795 19:48:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:42.795 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:42.795 19:48:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.795 19:48:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:42.795 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:42.795 19:48:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:42.795 19:48:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.795 19:48:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.795 19:48:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:42.795 19:48:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.795 19:48:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:42.795 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:42.795 19:48:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.795 19:48:23 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.795 19:48:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.795 19:48:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:42.795 19:48:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.795 19:48:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:42.795 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:42.795 19:48:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.795 19:48:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:42.795 19:48:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:42.795 19:48:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:42.795 19:48:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.795 19:48:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.795 19:48:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.795 19:48:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:42.795 19:48:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.795 19:48:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.795 19:48:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:42.795 19:48:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.795 19:48:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.795 19:48:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:42.795 19:48:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:42.795 19:48:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:42.795 19:48:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:42.795 19:48:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:42.795 19:48:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:42.795 19:48:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:42.795 19:48:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.795 19:48:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.795 19:48:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:42.795 19:48:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:42.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:17:42.795 00:17:42.795 --- 10.0.0.2 ping statistics --- 00:17:42.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.795 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:17:42.795 19:48:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:42.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:42.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:17:42.795 00:17:42.795 --- 10.0.0.1 ping statistics --- 00:17:42.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.795 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:17:42.795 19:48:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.795 19:48:23 -- nvmf/common.sh@411 -- # return 0 00:17:42.795 19:48:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:42.795 19:48:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.795 19:48:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:42.795 19:48:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.795 19:48:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:42.795 19:48:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:42.795 19:48:24 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:42.795 19:48:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:42.795 19:48:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:42.795 19:48:24 -- common/autotest_common.sh@10 -- # set +x 00:17:42.795 19:48:24 -- nvmf/common.sh@470 -- # nvmfpid=1730152 00:17:42.795 19:48:24 -- nvmf/common.sh@471 -- # waitforlisten 1730152 00:17:42.795 19:48:24 -- common/autotest_common.sh@817 -- # '[' -z 1730152 ']' 00:17:42.795 19:48:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:42.795 19:48:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.795 19:48:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:42.795 19:48:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.795 19:48:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:42.795 19:48:24 -- common/autotest_common.sh@10 -- # set +x 00:17:42.795 [2024-04-24 19:48:24.055866] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:17:42.796 [2024-04-24 19:48:24.055953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.796 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.796 [2024-04-24 19:48:24.125735] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:42.796 [2024-04-24 19:48:24.242665] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.796 [2024-04-24 19:48:24.242733] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.796 [2024-04-24 19:48:24.242756] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.796 [2024-04-24 19:48:24.242769] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.796 [2024-04-24 19:48:24.242781] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
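Per the nvmf_tcp_init trace above, the target-side port lives inside the cvl_0_0_ns_spdk namespace as 10.0.0.2 while the initiator side stays in the root namespace as 10.0.0.1, and the two pings verify both directions before nvmf_tgt is launched under ip netns exec. Condensed as a hedged sketch (cvl_0_0/cvl_0_1 are this rig's E810 ports; substitute your own NIC pair):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # netns -> initiator
    # ...then the target itself runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF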
00:17:42.796 [2024-04-24 19:48:24.242852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.796 [2024-04-24 19:48:24.242909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.796 [2024-04-24 19:48:24.243022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:42.796 [2024-04-24 19:48:24.243025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.728 19:48:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:43.728 19:48:24 -- common/autotest_common.sh@850 -- # return 0 00:17:43.728 19:48:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:43.728 19:48:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:43.728 19:48:24 -- common/autotest_common.sh@10 -- # set +x 00:17:43.728 19:48:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.728 19:48:24 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:43.728 19:48:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.728 19:48:24 -- common/autotest_common.sh@10 -- # set +x 00:17:43.728 [2024-04-24 19:48:25.001304] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.728 19:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.728 19:48:25 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:43.728 19:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.728 19:48:25 -- common/autotest_common.sh@10 -- # set +x 00:17:43.728 Malloc0 00:17:43.728 19:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.728 19:48:25 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:43.728 19:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.728 19:48:25 -- common/autotest_common.sh@10 -- # set +x 00:17:43.728 19:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.729 19:48:25 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:43.729 19:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.729 19:48:25 -- common/autotest_common.sh@10 -- # set +x 00:17:43.729 19:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.729 19:48:25 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.729 19:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.729 19:48:25 -- common/autotest_common.sh@10 -- # set +x 00:17:43.729 [2024-04-24 19:48:25.053434] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.729 19:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.729 19:48:25 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:43.729 19:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.729 19:48:25 -- common/autotest_common.sh@10 -- # set +x 00:17:43.729 [2024-04-24 19:48:25.061189] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:43.729 [ 00:17:43.729 { 00:17:43.729 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:43.729 "subtype": "Discovery", 00:17:43.729 "listen_addresses": [], 00:17:43.729 "allow_any_host": true, 00:17:43.729 "hosts": [] 00:17:43.729 }, 00:17:43.729 { 00:17:43.729 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:17:43.729 "subtype": "NVMe", 00:17:43.729 "listen_addresses": [ 00:17:43.729 { 00:17:43.729 "transport": "TCP", 00:17:43.729 "trtype": "TCP", 00:17:43.729 "adrfam": "IPv4", 00:17:43.729 "traddr": "10.0.0.2", 00:17:43.729 "trsvcid": "4420" 00:17:43.729 } 00:17:43.729 ], 00:17:43.729 "allow_any_host": true, 00:17:43.729 "hosts": [], 00:17:43.729 "serial_number": "SPDK00000000000001", 00:17:43.729 "model_number": "SPDK bdev Controller", 00:17:43.729 "max_namespaces": 2, 00:17:43.729 "min_cntlid": 1, 00:17:43.729 "max_cntlid": 65519, 00:17:43.729 "namespaces": [ 00:17:43.729 { 00:17:43.729 "nsid": 1, 00:17:43.729 "bdev_name": "Malloc0", 00:17:43.729 "name": "Malloc0", 00:17:43.729 "nguid": "6E350A35970842559D17ECFBE3329C46", 00:17:43.729 "uuid": "6e350a35-9708-4255-9d17-ecfbe3329c46" 00:17:43.729 } 00:17:43.729 ] 00:17:43.729 } 00:17:43.729 ] 00:17:43.729 19:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.729 19:48:25 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:43.729 19:48:25 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:43.729 19:48:25 -- host/aer.sh@33 -- # aerpid=1730304 00:17:43.729 19:48:25 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:43.729 19:48:25 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:43.729 19:48:25 -- common/autotest_common.sh@1251 -- # local i=0 00:17:43.729 19:48:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:43.729 19:48:25 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:17:43.729 19:48:25 -- common/autotest_common.sh@1254 -- # i=1 00:17:43.729 19:48:25 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:17:43.729 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.729 19:48:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:43.729 19:48:25 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:17:43.729 19:48:25 -- common/autotest_common.sh@1254 -- # i=2 00:17:43.729 19:48:25 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:17:43.987 19:48:25 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:43.987 19:48:25 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:43.987 19:48:25 -- common/autotest_common.sh@1262 -- # return 0 00:17:43.987 19:48:25 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:43.987 19:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.987 19:48:25 -- common/autotest_common.sh@10 -- # set +x 00:17:43.987 Malloc1 00:17:43.987 19:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.987 19:48:25 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:43.987 19:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.987 19:48:25 -- common/autotest_common.sh@10 -- # set +x 00:17:43.987 19:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.987 19:48:25 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:43.987 19:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.987 19:48:25 -- common/autotest_common.sh@10 -- # set +x 00:17:43.987 Asynchronous Event Request test 00:17:43.987 Attaching to 10.0.0.2 00:17:43.987 Attached to 10.0.0.2 00:17:43.987 Registering asynchronous event callbacks... 
00:17:43.987 Starting namespace attribute notice tests for all controllers... 00:17:43.987 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:43.987 aer_cb - Changed Namespace 00:17:43.987 Cleaning up... 00:17:43.987 [ 00:17:43.987 { 00:17:43.987 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:43.987 "subtype": "Discovery", 00:17:43.987 "listen_addresses": [], 00:17:43.987 "allow_any_host": true, 00:17:43.987 "hosts": [] 00:17:43.987 }, 00:17:43.987 { 00:17:43.987 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.987 "subtype": "NVMe", 00:17:43.987 "listen_addresses": [ 00:17:43.987 { 00:17:43.987 "transport": "TCP", 00:17:43.987 "trtype": "TCP", 00:17:43.987 "adrfam": "IPv4", 00:17:43.987 "traddr": "10.0.0.2", 00:17:43.987 "trsvcid": "4420" 00:17:43.987 } 00:17:43.987 ], 00:17:43.987 "allow_any_host": true, 00:17:43.987 "hosts": [], 00:17:43.987 "serial_number": "SPDK00000000000001", 00:17:43.987 "model_number": "SPDK bdev Controller", 00:17:43.987 "max_namespaces": 2, 00:17:43.987 "min_cntlid": 1, 00:17:43.987 "max_cntlid": 65519, 00:17:43.987 "namespaces": [ 00:17:43.987 { 00:17:43.987 "nsid": 1, 00:17:43.987 "bdev_name": "Malloc0", 00:17:43.987 "name": "Malloc0", 00:17:43.987 "nguid": "6E350A35970842559D17ECFBE3329C46", 00:17:43.987 "uuid": "6e350a35-9708-4255-9d17-ecfbe3329c46" 00:17:43.987 }, 00:17:43.987 { 00:17:43.987 "nsid": 2, 00:17:43.987 "bdev_name": "Malloc1", 00:17:43.987 "name": "Malloc1", 00:17:43.987 "nguid": "3E8A8D396AB44BF29547161C67BEA3D0", 00:17:43.987 "uuid": "3e8a8d39-6ab4-4bf2-9547-161c67bea3d0" 00:17:43.987 } 00:17:43.987 ] 00:17:43.987 } 00:17:43.987 ] 00:17:43.987 19:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.987 19:48:25 -- host/aer.sh@43 -- # wait 1730304 00:17:43.987 19:48:25 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:43.988 19:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.988 19:48:25 -- common/autotest_common.sh@10 -- # set +x 00:17:43.988 19:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.988 19:48:25 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:43.988 19:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.988 19:48:25 -- common/autotest_common.sh@10 -- # set +x 00:17:43.988 19:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.988 19:48:25 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.988 19:48:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.988 19:48:25 -- common/autotest_common.sh@10 -- # set +x 00:17:43.988 19:48:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.988 19:48:25 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:43.988 19:48:25 -- host/aer.sh@51 -- # nvmftestfini 00:17:43.988 19:48:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:43.988 19:48:25 -- nvmf/common.sh@117 -- # sync 00:17:43.988 19:48:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.988 19:48:25 -- nvmf/common.sh@120 -- # set +e 00:17:43.988 19:48:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.988 19:48:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.988 rmmod nvme_tcp 00:17:43.988 rmmod nvme_fabrics 00:17:43.988 rmmod nvme_keyring 00:17:43.988 19:48:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.988 19:48:25 -- nvmf/common.sh@124 -- # set -e 00:17:43.988 19:48:25 -- nvmf/common.sh@125 -- # return 0 00:17:43.988 19:48:25 -- nvmf/common.sh@478 -- # '[' -n 1730152 ']' 00:17:43.988 19:48:25 
-- nvmf/common.sh@479 -- # killprocess 1730152 00:17:43.988 19:48:25 -- common/autotest_common.sh@936 -- # '[' -z 1730152 ']' 00:17:43.988 19:48:25 -- common/autotest_common.sh@940 -- # kill -0 1730152 00:17:43.988 19:48:25 -- common/autotest_common.sh@941 -- # uname 00:17:43.988 19:48:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.988 19:48:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1730152 00:17:44.246 19:48:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:44.246 19:48:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:44.246 19:48:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1730152' 00:17:44.246 killing process with pid 1730152 00:17:44.246 19:48:25 -- common/autotest_common.sh@955 -- # kill 1730152 00:17:44.246 [2024-04-24 19:48:25.514790] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:44.246 19:48:25 -- common/autotest_common.sh@960 -- # wait 1730152 00:17:44.504 19:48:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:44.504 19:48:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:44.504 19:48:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:44.504 19:48:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:44.504 19:48:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:44.504 19:48:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.504 19:48:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.504 19:48:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.408 19:48:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:46.408 00:17:46.408 real 0m5.863s 00:17:46.408 user 0m6.784s 00:17:46.408 sys 0m1.835s 00:17:46.409 19:48:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:46.409 19:48:27 -- common/autotest_common.sh@10 -- # set +x 00:17:46.409 ************************************ 00:17:46.409 END TEST nvmf_aer 00:17:46.409 ************************************ 00:17:46.409 19:48:27 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:46.409 19:48:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:46.409 19:48:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:46.409 19:48:27 -- common/autotest_common.sh@10 -- # set +x 00:17:46.668 ************************************ 00:17:46.668 START TEST nvmf_async_init 00:17:46.668 ************************************ 00:17:46.668 19:48:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:46.668 * Looking for test storage... 
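For reference, the Changed Namespace event in the aer run above is provoked purely from the target side: with the aer tool already attached to cnode1 (which was created with -m 2, leaving a second namespace slot), hot-adding nsid 2 raises the AEN that aer_cb then reads from log page 4. A hedged sketch of that trigger, reusing the RPCs from the trace:

    rpc=./scripts/rpc.py
    # cnode1 was created with -m 2 (max_namespaces 2), so nsid 2 is available.
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    # The hot-add fires a Changed Namespace AEN to attached hosts.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2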
00:17:46.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:46.668 19:48:28 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.668 19:48:28 -- nvmf/common.sh@7 -- # uname -s 00:17:46.668 19:48:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.668 19:48:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.668 19:48:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.668 19:48:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.668 19:48:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.668 19:48:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.668 19:48:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.668 19:48:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.668 19:48:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.668 19:48:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.668 19:48:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:46.668 19:48:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:46.668 19:48:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.668 19:48:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.668 19:48:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:46.668 19:48:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.668 19:48:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:46.668 19:48:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.668 19:48:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.668 19:48:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.668 19:48:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.668 19:48:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.668 19:48:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.668 19:48:28 -- paths/export.sh@5 -- # export PATH 00:17:46.668 19:48:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.668 19:48:28 -- nvmf/common.sh@47 -- # : 0 00:17:46.668 19:48:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.668 19:48:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.668 19:48:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.668 19:48:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.668 19:48:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.668 19:48:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.668 19:48:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.668 19:48:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.668 19:48:28 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:46.668 19:48:28 -- host/async_init.sh@14 -- # null_block_size=512 00:17:46.668 19:48:28 -- host/async_init.sh@15 -- # null_bdev=null0 00:17:46.668 19:48:28 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:46.668 19:48:28 -- host/async_init.sh@20 -- # uuidgen 00:17:46.668 19:48:28 -- host/async_init.sh@20 -- # tr -d - 00:17:46.668 19:48:28 -- host/async_init.sh@20 -- # nguid=7b8125e5006548bd924885a076461fa5 00:17:46.668 19:48:28 -- host/async_init.sh@22 -- # nvmftestinit 00:17:46.668 19:48:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:46.668 19:48:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.668 19:48:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:46.668 19:48:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:46.668 19:48:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:46.668 19:48:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.668 19:48:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.668 19:48:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.668 19:48:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:46.668 19:48:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:46.668 19:48:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:46.668 19:48:28 -- common/autotest_common.sh@10 -- # set +x 00:17:48.570 19:48:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:48.570 19:48:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:48.570 19:48:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:48.570 19:48:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:48.570 19:48:29 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:48.570 19:48:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:48.570 19:48:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:48.570 19:48:29 -- nvmf/common.sh@295 -- # net_devs=() 00:17:48.570 19:48:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:48.570 19:48:29 -- nvmf/common.sh@296 -- # e810=() 00:17:48.570 19:48:29 -- nvmf/common.sh@296 -- # local -ga e810 00:17:48.570 19:48:29 -- nvmf/common.sh@297 -- # x722=() 00:17:48.570 19:48:29 -- nvmf/common.sh@297 -- # local -ga x722 00:17:48.570 19:48:29 -- nvmf/common.sh@298 -- # mlx=() 00:17:48.570 19:48:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:48.570 19:48:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.570 19:48:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.570 19:48:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.570 19:48:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.570 19:48:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.570 19:48:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.570 19:48:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.570 19:48:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.570 19:48:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.570 19:48:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.570 19:48:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.570 19:48:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:48.570 19:48:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:48.570 19:48:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:48.570 19:48:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:48.570 19:48:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:48.570 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:48.570 19:48:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:48.570 19:48:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:48.570 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:48.570 19:48:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:48.570 19:48:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:48.570 
19:48:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.570 19:48:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:48.570 19:48:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.570 19:48:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:48.570 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:48.570 19:48:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.570 19:48:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:48.570 19:48:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.570 19:48:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:48.570 19:48:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.570 19:48:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:48.570 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:48.570 19:48:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.570 19:48:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:48.570 19:48:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:48.570 19:48:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:48.570 19:48:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:48.570 19:48:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.570 19:48:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.570 19:48:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.570 19:48:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:48.570 19:48:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.570 19:48:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.570 19:48:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:48.570 19:48:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.570 19:48:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.570 19:48:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:48.570 19:48:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:48.570 19:48:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.570 19:48:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.570 19:48:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.570 19:48:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.570 19:48:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:48.570 19:48:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.571 19:48:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.571 19:48:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.571 19:48:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:48.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:17:48.571 00:17:48.571 --- 10.0.0.2 ping statistics --- 00:17:48.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.571 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:17:48.571 19:48:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:48.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:17:48.571 00:17:48.571 --- 10.0.0.1 ping statistics --- 00:17:48.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.571 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:17:48.571 19:48:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.571 19:48:30 -- nvmf/common.sh@411 -- # return 0 00:17:48.571 19:48:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:48.571 19:48:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.571 19:48:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:48.571 19:48:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:48.571 19:48:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.571 19:48:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:48.571 19:48:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:48.830 19:48:30 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:48.830 19:48:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:48.830 19:48:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:48.830 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:48.830 19:48:30 -- nvmf/common.sh@470 -- # nvmfpid=1732252 00:17:48.830 19:48:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:48.830 19:48:30 -- nvmf/common.sh@471 -- # waitforlisten 1732252 00:17:48.830 19:48:30 -- common/autotest_common.sh@817 -- # '[' -z 1732252 ']' 00:17:48.830 19:48:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.830 19:48:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:48.830 19:48:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.830 19:48:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:48.830 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:48.830 [2024-04-24 19:48:30.135420] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:17:48.830 [2024-04-24 19:48:30.135497] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.830 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.830 [2024-04-24 19:48:30.204439] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.830 [2024-04-24 19:48:30.320006] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.830 [2024-04-24 19:48:30.320074] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.830 [2024-04-24 19:48:30.320087] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.830 [2024-04-24 19:48:30.320099] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.830 [2024-04-24 19:48:30.320109] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
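The nvmf_tcp_init steps traced above build the two-port topology this whole run depends on: one E810 port (cvl_0_0) is moved into a dedicated network namespace for the SPDK target, while the other (cvl_0_1) stays in the default namespace for the initiator, so both ends run on real hardware in a single machine. A minimal standalone sketch of the same setup, using the interface names and 10.0.0.x addresses from this log (both are specific to this rig):

  # create a namespace for the target side and move one port into it
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps cvl_0_1 in the default namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic reach the default listener port
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify both directions before starting the target, as the log does
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1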
00:17:48.830 [2024-04-24 19:48:30.320138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.089 19:48:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:49.089 19:48:30 -- common/autotest_common.sh@850 -- # return 0 00:17:49.089 19:48:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:49.089 19:48:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:49.089 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.089 19:48:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.089 19:48:30 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:49.089 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.089 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.089 [2024-04-24 19:48:30.470758] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.089 19:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.089 19:48:30 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:49.089 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.089 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.089 null0 00:17:49.089 19:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.089 19:48:30 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:49.089 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.089 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.089 19:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.089 19:48:30 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:49.089 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.089 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.089 19:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.089 19:48:30 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7b8125e5006548bd924885a076461fa5 00:17:49.089 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.089 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.089 19:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.089 19:48:30 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:49.089 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.089 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.089 [2024-04-24 19:48:30.511033] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.089 19:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.089 19:48:30 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:49.089 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.089 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.347 nvme0n1 00:17:49.347 19:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.347 19:48:30 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:49.347 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.347 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.347 [ 00:17:49.347 { 00:17:49.347 "name": "nvme0n1", 00:17:49.347 "aliases": [ 00:17:49.347 
"7b8125e5-0065-48bd-9248-85a076461fa5" 00:17:49.347 ], 00:17:49.347 "product_name": "NVMe disk", 00:17:49.347 "block_size": 512, 00:17:49.347 "num_blocks": 2097152, 00:17:49.347 "uuid": "7b8125e5-0065-48bd-9248-85a076461fa5", 00:17:49.347 "assigned_rate_limits": { 00:17:49.347 "rw_ios_per_sec": 0, 00:17:49.347 "rw_mbytes_per_sec": 0, 00:17:49.347 "r_mbytes_per_sec": 0, 00:17:49.347 "w_mbytes_per_sec": 0 00:17:49.347 }, 00:17:49.347 "claimed": false, 00:17:49.347 "zoned": false, 00:17:49.347 "supported_io_types": { 00:17:49.347 "read": true, 00:17:49.347 "write": true, 00:17:49.347 "unmap": false, 00:17:49.347 "write_zeroes": true, 00:17:49.347 "flush": true, 00:17:49.347 "reset": true, 00:17:49.347 "compare": true, 00:17:49.347 "compare_and_write": true, 00:17:49.347 "abort": true, 00:17:49.347 "nvme_admin": true, 00:17:49.347 "nvme_io": true 00:17:49.347 }, 00:17:49.347 "memory_domains": [ 00:17:49.347 { 00:17:49.347 "dma_device_id": "system", 00:17:49.347 "dma_device_type": 1 00:17:49.347 } 00:17:49.347 ], 00:17:49.347 "driver_specific": { 00:17:49.347 "nvme": [ 00:17:49.347 { 00:17:49.347 "trid": { 00:17:49.347 "trtype": "TCP", 00:17:49.347 "adrfam": "IPv4", 00:17:49.347 "traddr": "10.0.0.2", 00:17:49.347 "trsvcid": "4420", 00:17:49.347 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:49.347 }, 00:17:49.347 "ctrlr_data": { 00:17:49.347 "cntlid": 1, 00:17:49.347 "vendor_id": "0x8086", 00:17:49.347 "model_number": "SPDK bdev Controller", 00:17:49.347 "serial_number": "00000000000000000000", 00:17:49.347 "firmware_revision": "24.05", 00:17:49.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:49.347 "oacs": { 00:17:49.347 "security": 0, 00:17:49.347 "format": 0, 00:17:49.347 "firmware": 0, 00:17:49.347 "ns_manage": 0 00:17:49.347 }, 00:17:49.347 "multi_ctrlr": true, 00:17:49.347 "ana_reporting": false 00:17:49.347 }, 00:17:49.347 "vs": { 00:17:49.347 "nvme_version": "1.3" 00:17:49.347 }, 00:17:49.347 "ns_data": { 00:17:49.347 "id": 1, 00:17:49.347 "can_share": true 00:17:49.347 } 00:17:49.347 } 00:17:49.347 ], 00:17:49.347 "mp_policy": "active_passive" 00:17:49.347 } 00:17:49.347 } 00:17:49.347 ] 00:17:49.347 19:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.347 19:48:30 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:49.347 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.347 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.347 [2024-04-24 19:48:30.759492] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:49.347 [2024-04-24 19:48:30.759563] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1954f90 (9): Bad file descriptor 00:17:49.605 [2024-04-24 19:48:30.891753] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:49.605 19:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.605 19:48:30 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:49.605 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.605 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.605 [ 00:17:49.605 { 00:17:49.605 "name": "nvme0n1", 00:17:49.605 "aliases": [ 00:17:49.605 "7b8125e5-0065-48bd-9248-85a076461fa5" 00:17:49.605 ], 00:17:49.605 "product_name": "NVMe disk", 00:17:49.605 "block_size": 512, 00:17:49.605 "num_blocks": 2097152, 00:17:49.605 "uuid": "7b8125e5-0065-48bd-9248-85a076461fa5", 00:17:49.605 "assigned_rate_limits": { 00:17:49.605 "rw_ios_per_sec": 0, 00:17:49.605 "rw_mbytes_per_sec": 0, 00:17:49.605 "r_mbytes_per_sec": 0, 00:17:49.605 "w_mbytes_per_sec": 0 00:17:49.605 }, 00:17:49.605 "claimed": false, 00:17:49.605 "zoned": false, 00:17:49.605 "supported_io_types": { 00:17:49.605 "read": true, 00:17:49.605 "write": true, 00:17:49.605 "unmap": false, 00:17:49.605 "write_zeroes": true, 00:17:49.605 "flush": true, 00:17:49.605 "reset": true, 00:17:49.605 "compare": true, 00:17:49.605 "compare_and_write": true, 00:17:49.605 "abort": true, 00:17:49.605 "nvme_admin": true, 00:17:49.605 "nvme_io": true 00:17:49.605 }, 00:17:49.605 "memory_domains": [ 00:17:49.605 { 00:17:49.605 "dma_device_id": "system", 00:17:49.605 "dma_device_type": 1 00:17:49.605 } 00:17:49.605 ], 00:17:49.605 "driver_specific": { 00:17:49.605 "nvme": [ 00:17:49.605 { 00:17:49.605 "trid": { 00:17:49.605 "trtype": "TCP", 00:17:49.605 "adrfam": "IPv4", 00:17:49.605 "traddr": "10.0.0.2", 00:17:49.605 "trsvcid": "4420", 00:17:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:49.605 }, 00:17:49.605 "ctrlr_data": { 00:17:49.605 "cntlid": 2, 00:17:49.605 "vendor_id": "0x8086", 00:17:49.605 "model_number": "SPDK bdev Controller", 00:17:49.605 "serial_number": "00000000000000000000", 00:17:49.605 "firmware_revision": "24.05", 00:17:49.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:49.605 "oacs": { 00:17:49.605 "security": 0, 00:17:49.605 "format": 0, 00:17:49.605 "firmware": 0, 00:17:49.605 "ns_manage": 0 00:17:49.605 }, 00:17:49.605 "multi_ctrlr": true, 00:17:49.605 "ana_reporting": false 00:17:49.605 }, 00:17:49.605 "vs": { 00:17:49.605 "nvme_version": "1.3" 00:17:49.605 }, 00:17:49.605 "ns_data": { 00:17:49.605 "id": 1, 00:17:49.605 "can_share": true 00:17:49.605 } 00:17:49.605 } 00:17:49.605 ], 00:17:49.605 "mp_policy": "active_passive" 00:17:49.605 } 00:17:49.605 } 00:17:49.605 ] 00:17:49.605 19:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.605 19:48:30 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.605 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.605 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.605 19:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.605 19:48:30 -- host/async_init.sh@53 -- # mktemp 00:17:49.605 19:48:30 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.aygw7hrJTR 00:17:49.605 19:48:30 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:49.605 19:48:30 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.aygw7hrJTR 00:17:49.605 19:48:30 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:49.605 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.605 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.605 19:48:30 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.605 19:48:30 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:49.605 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.605 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.605 [2024-04-24 19:48:30.936092] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:49.605 [2024-04-24 19:48:30.936212] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:49.605 19:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.605 19:48:30 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aygw7hrJTR 00:17:49.605 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.605 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.605 [2024-04-24 19:48:30.944111] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:49.605 19:48:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.605 19:48:30 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aygw7hrJTR 00:17:49.605 19:48:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.605 19:48:30 -- common/autotest_common.sh@10 -- # set +x 00:17:49.605 [2024-04-24 19:48:30.952126] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.605 [2024-04-24 19:48:30.952177] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:49.605 nvme0n1 00:17:49.605 19:48:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.605 19:48:31 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:49.605 19:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.605 19:48:31 -- common/autotest_common.sh@10 -- # set +x 00:17:49.605 [ 00:17:49.605 { 00:17:49.605 "name": "nvme0n1", 00:17:49.605 "aliases": [ 00:17:49.605 "7b8125e5-0065-48bd-9248-85a076461fa5" 00:17:49.605 ], 00:17:49.605 "product_name": "NVMe disk", 00:17:49.605 "block_size": 512, 00:17:49.605 "num_blocks": 2097152, 00:17:49.605 "uuid": "7b8125e5-0065-48bd-9248-85a076461fa5", 00:17:49.605 "assigned_rate_limits": { 00:17:49.605 "rw_ios_per_sec": 0, 00:17:49.605 "rw_mbytes_per_sec": 0, 00:17:49.606 "r_mbytes_per_sec": 0, 00:17:49.606 "w_mbytes_per_sec": 0 00:17:49.606 }, 00:17:49.606 "claimed": false, 00:17:49.606 "zoned": false, 00:17:49.606 "supported_io_types": { 00:17:49.606 "read": true, 00:17:49.606 "write": true, 00:17:49.606 "unmap": false, 00:17:49.606 "write_zeroes": true, 00:17:49.606 "flush": true, 00:17:49.606 "reset": true, 00:17:49.606 "compare": true, 00:17:49.606 "compare_and_write": true, 00:17:49.606 "abort": true, 00:17:49.606 "nvme_admin": true, 00:17:49.606 "nvme_io": true 00:17:49.606 }, 00:17:49.606 "memory_domains": [ 00:17:49.606 { 00:17:49.606 "dma_device_id": "system", 00:17:49.606 "dma_device_type": 1 00:17:49.606 } 00:17:49.606 ], 00:17:49.606 "driver_specific": { 00:17:49.606 "nvme": [ 00:17:49.606 { 00:17:49.606 "trid": { 00:17:49.606 "trtype": "TCP", 00:17:49.606 "adrfam": "IPv4", 00:17:49.606 "traddr": "10.0.0.2", 
00:17:49.606 "trsvcid": "4421", 00:17:49.606 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:49.606 }, 00:17:49.606 "ctrlr_data": { 00:17:49.606 "cntlid": 3, 00:17:49.606 "vendor_id": "0x8086", 00:17:49.606 "model_number": "SPDK bdev Controller", 00:17:49.606 "serial_number": "00000000000000000000", 00:17:49.606 "firmware_revision": "24.05", 00:17:49.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:49.606 "oacs": { 00:17:49.606 "security": 0, 00:17:49.606 "format": 0, 00:17:49.606 "firmware": 0, 00:17:49.606 "ns_manage": 0 00:17:49.606 }, 00:17:49.606 "multi_ctrlr": true, 00:17:49.606 "ana_reporting": false 00:17:49.606 }, 00:17:49.606 "vs": { 00:17:49.606 "nvme_version": "1.3" 00:17:49.606 }, 00:17:49.606 "ns_data": { 00:17:49.606 "id": 1, 00:17:49.606 "can_share": true 00:17:49.606 } 00:17:49.606 } 00:17:49.606 ], 00:17:49.606 "mp_policy": "active_passive" 00:17:49.606 } 00:17:49.606 } 00:17:49.606 ] 00:17:49.606 19:48:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.606 19:48:31 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.606 19:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.606 19:48:31 -- common/autotest_common.sh@10 -- # set +x 00:17:49.606 19:48:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.606 19:48:31 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.aygw7hrJTR 00:17:49.606 19:48:31 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:49.606 19:48:31 -- host/async_init.sh@78 -- # nvmftestfini 00:17:49.606 19:48:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:49.606 19:48:31 -- nvmf/common.sh@117 -- # sync 00:17:49.606 19:48:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:49.606 19:48:31 -- nvmf/common.sh@120 -- # set +e 00:17:49.606 19:48:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:49.606 19:48:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:49.606 rmmod nvme_tcp 00:17:49.606 rmmod nvme_fabrics 00:17:49.606 rmmod nvme_keyring 00:17:49.606 19:48:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:49.606 19:48:31 -- nvmf/common.sh@124 -- # set -e 00:17:49.606 19:48:31 -- nvmf/common.sh@125 -- # return 0 00:17:49.606 19:48:31 -- nvmf/common.sh@478 -- # '[' -n 1732252 ']' 00:17:49.606 19:48:31 -- nvmf/common.sh@479 -- # killprocess 1732252 00:17:49.606 19:48:31 -- common/autotest_common.sh@936 -- # '[' -z 1732252 ']' 00:17:49.606 19:48:31 -- common/autotest_common.sh@940 -- # kill -0 1732252 00:17:49.606 19:48:31 -- common/autotest_common.sh@941 -- # uname 00:17:49.606 19:48:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.606 19:48:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1732252 00:17:49.864 19:48:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:49.864 19:48:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:49.864 19:48:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1732252' 00:17:49.864 killing process with pid 1732252 00:17:49.864 19:48:31 -- common/autotest_common.sh@955 -- # kill 1732252 00:17:49.864 [2024-04-24 19:48:31.142849] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:49.864 [2024-04-24 19:48:31.142882] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:49.864 19:48:31 -- common/autotest_common.sh@960 -- # wait 1732252 00:17:50.124 19:48:31 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:50.124 19:48:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:50.124 19:48:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:50.124 19:48:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.124 19:48:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:50.124 19:48:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.124 19:48:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.124 19:48:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.024 19:48:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:52.024 00:17:52.024 real 0m5.471s 00:17:52.024 user 0m2.054s 00:17:52.024 sys 0m1.803s 00:17:52.024 19:48:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:52.024 19:48:33 -- common/autotest_common.sh@10 -- # set +x 00:17:52.024 ************************************ 00:17:52.024 END TEST nvmf_async_init 00:17:52.024 ************************************ 00:17:52.024 19:48:33 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:52.024 19:48:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:52.024 19:48:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:52.024 19:48:33 -- common/autotest_common.sh@10 -- # set +x 00:17:52.283 ************************************ 00:17:52.283 START TEST dma 00:17:52.283 ************************************ 00:17:52.283 19:48:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:52.283 * Looking for test storage... 00:17:52.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:52.283 19:48:33 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.283 19:48:33 -- nvmf/common.sh@7 -- # uname -s 00:17:52.283 19:48:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.283 19:48:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.283 19:48:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.283 19:48:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.283 19:48:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.283 19:48:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.283 19:48:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.283 19:48:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.283 19:48:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.283 19:48:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.283 19:48:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.283 19:48:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.283 19:48:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.283 19:48:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.283 19:48:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.283 19:48:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.283 19:48:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.283 19:48:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.283 19:48:33 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.283 19:48:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.283 19:48:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.283 19:48:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.283 19:48:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.283 19:48:33 -- paths/export.sh@5 -- # export PATH 00:17:52.283 19:48:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.283 19:48:33 -- nvmf/common.sh@47 -- # : 0 00:17:52.283 19:48:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:52.283 19:48:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:52.283 19:48:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.283 19:48:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.283 19:48:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.283 19:48:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:52.283 19:48:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:52.283 19:48:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:52.283 19:48:33 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:52.283 19:48:33 -- host/dma.sh@13 -- # exit 0 00:17:52.283 00:17:52.283 real 0m0.068s 00:17:52.283 user 0m0.028s 00:17:52.283 sys 0m0.046s 00:17:52.284 19:48:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:52.284 19:48:33 -- common/autotest_common.sh@10 -- # set +x 00:17:52.284 ************************************ 00:17:52.284 END TEST dma 00:17:52.284 
************************************ 00:17:52.284 19:48:33 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:52.284 19:48:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:52.284 19:48:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:52.284 19:48:33 -- common/autotest_common.sh@10 -- # set +x 00:17:52.284 ************************************ 00:17:52.284 START TEST nvmf_identify 00:17:52.284 ************************************ 00:17:52.284 19:48:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:52.284 * Looking for test storage... 00:17:52.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:52.284 19:48:33 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.284 19:48:33 -- nvmf/common.sh@7 -- # uname -s 00:17:52.284 19:48:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.284 19:48:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.284 19:48:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.284 19:48:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.284 19:48:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.284 19:48:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.284 19:48:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.284 19:48:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.284 19:48:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.284 19:48:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.543 19:48:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.543 19:48:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:52.543 19:48:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.543 19:48:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.543 19:48:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.543 19:48:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.543 19:48:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.543 19:48:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.543 19:48:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.543 19:48:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.543 19:48:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.543 19:48:33 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.543 19:48:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.543 19:48:33 -- paths/export.sh@5 -- # export PATH 00:17:52.543 19:48:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.543 19:48:33 -- nvmf/common.sh@47 -- # : 0 00:17:52.543 19:48:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:52.543 19:48:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:52.543 19:48:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.543 19:48:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.543 19:48:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.543 19:48:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:52.543 19:48:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:52.543 19:48:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:52.543 19:48:33 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:52.543 19:48:33 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:52.543 19:48:33 -- host/identify.sh@14 -- # nvmftestinit 00:17:52.543 19:48:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:52.543 19:48:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.543 19:48:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:52.543 19:48:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:52.543 19:48:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:52.543 19:48:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.543 19:48:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.543 19:48:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.543 19:48:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:52.543 19:48:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:52.543 19:48:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:52.543 19:48:33 -- common/autotest_common.sh@10 -- # set +x 00:17:54.473 19:48:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:17:54.473 19:48:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:54.473 19:48:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:54.473 19:48:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:54.473 19:48:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:54.473 19:48:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:54.473 19:48:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:54.473 19:48:35 -- nvmf/common.sh@295 -- # net_devs=() 00:17:54.473 19:48:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:54.473 19:48:35 -- nvmf/common.sh@296 -- # e810=() 00:17:54.473 19:48:35 -- nvmf/common.sh@296 -- # local -ga e810 00:17:54.473 19:48:35 -- nvmf/common.sh@297 -- # x722=() 00:17:54.473 19:48:35 -- nvmf/common.sh@297 -- # local -ga x722 00:17:54.473 19:48:35 -- nvmf/common.sh@298 -- # mlx=() 00:17:54.473 19:48:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:54.473 19:48:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.473 19:48:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.473 19:48:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.473 19:48:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.473 19:48:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.473 19:48:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.473 19:48:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.473 19:48:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.473 19:48:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.473 19:48:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.473 19:48:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.473 19:48:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:54.473 19:48:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:54.473 19:48:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:54.473 19:48:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:54.473 19:48:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:54.473 19:48:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:54.473 19:48:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.473 19:48:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:54.473 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:54.474 19:48:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.474 19:48:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:54.474 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:54.474 19:48:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
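The scan traced here is nvmf/common.sh mapping supported NIC PCI IDs to kernel net devices: with SPDK_TEST_NVMF_NICS=e810 only the e810 list is kept, both ports match 8086:159b (driver ice), and their interface names are resolved through sysfs. That lookup reduces to a glob, sketched for one port (the 0000:0a:00.0 address is this rig's):

  pci=0000:0a:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev on the port
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 on this machine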
00:17:54.474 19:48:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.474 19:48:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.474 19:48:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:54.474 19:48:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.474 19:48:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:54.474 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:54.474 19:48:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.474 19:48:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.474 19:48:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.474 19:48:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:54.474 19:48:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.474 19:48:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:54.474 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:54.474 19:48:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.474 19:48:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:54.474 19:48:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:54.474 19:48:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:54.474 19:48:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:54.474 19:48:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.474 19:48:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.474 19:48:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:54.474 19:48:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:54.474 19:48:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:54.474 19:48:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:54.474 19:48:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:54.474 19:48:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.474 19:48:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:54.474 19:48:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:54.474 19:48:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:54.474 19:48:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:54.474 19:48:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:54.474 19:48:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:54.474 19:48:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:54.474 19:48:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:54.474 19:48:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:54.474 19:48:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:54.474 19:48:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:54.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:54.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:17:54.474 00:17:54.474 --- 10.0.0.2 ping statistics --- 00:17:54.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.474 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:17:54.474 19:48:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:54.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:54.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:17:54.474 00:17:54.474 --- 10.0.0.1 ping statistics --- 00:17:54.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.474 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:17:54.474 19:48:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.474 19:48:35 -- nvmf/common.sh@411 -- # return 0 00:17:54.474 19:48:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:54.474 19:48:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.474 19:48:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:54.474 19:48:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.474 19:48:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:54.474 19:48:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:54.474 19:48:35 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:54.474 19:48:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:54.474 19:48:35 -- common/autotest_common.sh@10 -- # set +x 00:17:54.474 19:48:35 -- host/identify.sh@19 -- # nvmfpid=1734391 00:17:54.474 19:48:35 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:54.474 19:48:35 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:54.474 19:48:35 -- host/identify.sh@23 -- # waitforlisten 1734391 00:17:54.474 19:48:35 -- common/autotest_common.sh@817 -- # '[' -z 1734391 ']' 00:17:54.474 19:48:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.474 19:48:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:54.474 19:48:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.474 19:48:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:54.474 19:48:35 -- common/autotest_common.sh@10 -- # set +x 00:17:54.742 [2024-04-24 19:48:35.973301] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:17:54.742 [2024-04-24 19:48:35.973393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.742 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.742 [2024-04-24 19:48:36.040515] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:54.742 [2024-04-24 19:48:36.156281] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.742 [2024-04-24 19:48:36.156337] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
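identify.sh starts its own target with a wider core mask than async_init used (0xF instead of 0x1), which is why four reactor cores come up in the notices that follow. A condensed sketch of what nvmfappstart does here; the backgrounding and pid capture are paraphrased from the harness rather than verbatim, and paths are shortened:

  modprobe nvme-tcp                       # kernel NVMe/TCP initiator for the host side
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -e sets the tracepoint group mask
  nvmfpid=$!
  waitforlisten "$nvmfpid"                # polls until /var/tmp/spdk.sock accepts RPCs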
00:17:54.742 [2024-04-24 19:48:36.156350] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.742 [2024-04-24 19:48:36.156365] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.742 [2024-04-24 19:48:36.156375] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.742 [2024-04-24 19:48:36.156487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.742 [2024-04-24 19:48:36.156549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.742 [2024-04-24 19:48:36.156617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.742 [2024-04-24 19:48:36.156620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.002 19:48:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:55.002 19:48:36 -- common/autotest_common.sh@850 -- # return 0 00:17:55.002 19:48:36 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:55.002 19:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:55.002 19:48:36 -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 [2024-04-24 19:48:36.291423] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.002 19:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:55.002 19:48:36 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:55.002 19:48:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:55.002 19:48:36 -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 19:48:36 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:55.002 19:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:55.002 19:48:36 -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 Malloc0 00:17:55.002 19:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:55.002 19:48:36 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:55.002 19:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:55.002 19:48:36 -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 19:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:55.002 19:48:36 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:55.002 19:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:55.002 19:48:36 -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 19:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:55.002 19:48:36 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:55.002 19:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:55.002 19:48:36 -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 [2024-04-24 19:48:36.373238] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.002 19:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:55.002 19:48:36 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:55.002 19:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:55.002 19:48:36 -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 19:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:55.002 19:48:36 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:17:55.002 19:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:55.002 19:48:36 -- common/autotest_common.sh@10 -- # set +x 00:17:55.002 [2024-04-24 19:48:36.389001] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:55.002 [ 00:17:55.002 { 00:17:55.002 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:55.002 "subtype": "Discovery", 00:17:55.002 "listen_addresses": [ 00:17:55.002 { 00:17:55.002 "transport": "TCP", 00:17:55.002 "trtype": "TCP", 00:17:55.002 "adrfam": "IPv4", 00:17:55.002 "traddr": "10.0.0.2", 00:17:55.002 "trsvcid": "4420" 00:17:55.002 } 00:17:55.002 ], 00:17:55.002 "allow_any_host": true, 00:17:55.002 "hosts": [] 00:17:55.002 }, 00:17:55.002 { 00:17:55.002 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.002 "subtype": "NVMe", 00:17:55.002 "listen_addresses": [ 00:17:55.002 { 00:17:55.002 "transport": "TCP", 00:17:55.002 "trtype": "TCP", 00:17:55.002 "adrfam": "IPv4", 00:17:55.002 "traddr": "10.0.0.2", 00:17:55.002 "trsvcid": "4420" 00:17:55.002 } 00:17:55.002 ], 00:17:55.002 "allow_any_host": true, 00:17:55.002 "hosts": [], 00:17:55.002 "serial_number": "SPDK00000000000001", 00:17:55.002 "model_number": "SPDK bdev Controller", 00:17:55.002 "max_namespaces": 32, 00:17:55.002 "min_cntlid": 1, 00:17:55.002 "max_cntlid": 65519, 00:17:55.002 "namespaces": [ 00:17:55.002 { 00:17:55.002 "nsid": 1, 00:17:55.002 "bdev_name": "Malloc0", 00:17:55.002 "name": "Malloc0", 00:17:55.002 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:55.002 "eui64": "ABCDEF0123456789", 00:17:55.002 "uuid": "228d6a18-85cf-4f6b-966b-71a499bd81bc" 00:17:55.002 } 00:17:55.002 ] 00:17:55.002 } 00:17:55.002 ] 00:17:55.002 19:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:55.002 19:48:36 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:55.003 [2024-04-24 19:48:36.416185] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
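The nvmf_get_subsystems dump above confirms the shape the identify pass needs: the well-known discovery subsystem plus cnode1 backed by the 64 MiB Malloc0 namespace, both listening on 10.0.0.2:4420. The probe that starts here connects to the discovery NQN first; the setup and invocation, condensed from the RPCs traced above (spdk_nvme_identify lives under build/bin in the SPDK tree):

  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
          --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/bin/spdk_nvme_identify -L all \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'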
00:17:55.003 [2024-04-24 19:48:36.416235] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734430 ] 00:17:55.003 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.003 [2024-04-24 19:48:36.454239] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:55.003 [2024-04-24 19:48:36.454310] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:55.003 [2024-04-24 19:48:36.454321] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:55.003 [2024-04-24 19:48:36.454337] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:55.003 [2024-04-24 19:48:36.454351] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:55.003 [2024-04-24 19:48:36.454698] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:55.003 [2024-04-24 19:48:36.454757] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c90d00 0 00:17:55.003 [2024-04-24 19:48:36.470642] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:55.003 [2024-04-24 19:48:36.470665] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:55.003 [2024-04-24 19:48:36.470675] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:55.003 [2024-04-24 19:48:36.470681] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:55.003 [2024-04-24 19:48:36.470736] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.470750] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.470758] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c90d00) 00:17:55.003 [2024-04-24 19:48:36.470779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:55.003 [2024-04-24 19:48:36.470806] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cefec0, cid 0, qid 0 00:17:55.003 [2024-04-24 19:48:36.476642] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.003 [2024-04-24 19:48:36.476661] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.003 [2024-04-24 19:48:36.476668] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.476678] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cefec0) on tqpair=0x1c90d00 00:17:55.003 [2024-04-24 19:48:36.476698] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:55.003 [2024-04-24 19:48:36.476710] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:55.003 [2024-04-24 19:48:36.476719] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:55.003 [2024-04-24 19:48:36.476742] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.476751] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.476758] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c90d00) 00:17:55.003 [2024-04-24 19:48:36.476770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.003 [2024-04-24 19:48:36.476795] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cefec0, cid 0, qid 0 00:17:55.003 [2024-04-24 19:48:36.477000] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.003 [2024-04-24 19:48:36.477013] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.003 [2024-04-24 19:48:36.477020] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.477027] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cefec0) on tqpair=0x1c90d00 00:17:55.003 [2024-04-24 19:48:36.477038] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:55.003 [2024-04-24 19:48:36.477051] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:55.003 [2024-04-24 19:48:36.477064] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.477072] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.477078] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c90d00) 00:17:55.003 [2024-04-24 19:48:36.477089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.003 [2024-04-24 19:48:36.477111] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cefec0, cid 0, qid 0 00:17:55.003 [2024-04-24 19:48:36.477302] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.003 [2024-04-24 19:48:36.477317] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.003 [2024-04-24 19:48:36.477324] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.477331] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cefec0) on tqpair=0x1c90d00 00:17:55.003 [2024-04-24 19:48:36.477342] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:55.003 [2024-04-24 19:48:36.477356] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:55.003 [2024-04-24 19:48:36.477369] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.477376] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.477383] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c90d00) 00:17:55.003 [2024-04-24 19:48:36.477394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.003 [2024-04-24 19:48:36.477415] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cefec0, cid 0, qid 0 00:17:55.003 [2024-04-24 19:48:36.477599] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.003 [2024-04-24 
19:48:36.477614] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.003 [2024-04-24 19:48:36.477621] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.481653] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cefec0) on tqpair=0x1c90d00 00:17:55.003 [2024-04-24 19:48:36.481672] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:55.003 [2024-04-24 19:48:36.481690] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.481700] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.481707] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c90d00) 00:17:55.003 [2024-04-24 19:48:36.481718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.003 [2024-04-24 19:48:36.481741] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cefec0, cid 0, qid 0 00:17:55.003 [2024-04-24 19:48:36.481929] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.003 [2024-04-24 19:48:36.481942] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.003 [2024-04-24 19:48:36.481949] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.481960] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cefec0) on tqpair=0x1c90d00 00:17:55.003 [2024-04-24 19:48:36.481972] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:55.003 [2024-04-24 19:48:36.481980] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:55.003 [2024-04-24 19:48:36.481993] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:55.003 [2024-04-24 19:48:36.482104] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:55.003 [2024-04-24 19:48:36.482113] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:55.003 [2024-04-24 19:48:36.482128] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.482136] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.482143] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c90d00) 00:17:55.003 [2024-04-24 19:48:36.482154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.003 [2024-04-24 19:48:36.482175] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cefec0, cid 0, qid 0 00:17:55.003 [2024-04-24 19:48:36.482355] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.003 [2024-04-24 19:48:36.482367] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.003 [2024-04-24 19:48:36.482375] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.482381] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cefec0) on tqpair=0x1c90d00 00:17:55.003 [2024-04-24 19:48:36.482392] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:55.003 [2024-04-24 19:48:36.482407] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.482417] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.482424] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c90d00) 00:17:55.003 [2024-04-24 19:48:36.482434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.003 [2024-04-24 19:48:36.482454] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cefec0, cid 0, qid 0 00:17:55.003 [2024-04-24 19:48:36.482618] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.003 [2024-04-24 19:48:36.482643] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.003 [2024-04-24 19:48:36.482652] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.003 [2024-04-24 19:48:36.482659] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cefec0) on tqpair=0x1c90d00 00:17:55.003 [2024-04-24 19:48:36.482669] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:55.003 [2024-04-24 19:48:36.482678] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:55.003 [2024-04-24 19:48:36.482691] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:55.003 [2024-04-24 19:48:36.482706] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:55.003 [2024-04-24 19:48:36.482725] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.482734] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c90d00) 00:17:55.004 [2024-04-24 19:48:36.482749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.004 [2024-04-24 19:48:36.482772] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cefec0, cid 0, qid 0 00:17:55.004 [2024-04-24 19:48:36.482990] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:55.004 [2024-04-24 19:48:36.483006] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:55.004 [2024-04-24 19:48:36.483013] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483020] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c90d00): datao=0, datal=4096, cccid=0 00:17:55.004 [2024-04-24 19:48:36.483028] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cefec0) on tqpair(0x1c90d00): expected_datao=0, payload_size=4096 00:17:55.004 [2024-04-24 19:48:36.483037] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483048] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483058] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483100] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.004 [2024-04-24 19:48:36.483111] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.004 [2024-04-24 19:48:36.483118] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483125] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cefec0) on tqpair=0x1c90d00 00:17:55.004 [2024-04-24 19:48:36.483139] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:55.004 [2024-04-24 19:48:36.483148] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:55.004 [2024-04-24 19:48:36.483156] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:55.004 [2024-04-24 19:48:36.483165] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:55.004 [2024-04-24 19:48:36.483173] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:55.004 [2024-04-24 19:48:36.483182] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:55.004 [2024-04-24 19:48:36.483196] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:55.004 [2024-04-24 19:48:36.483210] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483218] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483225] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c90d00) 00:17:55.004 [2024-04-24 19:48:36.483236] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:55.004 [2024-04-24 19:48:36.483257] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cefec0, cid 0, qid 0 00:17:55.004 [2024-04-24 19:48:36.483471] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.004 [2024-04-24 19:48:36.483483] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.004 [2024-04-24 19:48:36.483490] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483496] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cefec0) on tqpair=0x1c90d00 00:17:55.004 [2024-04-24 19:48:36.483511] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483519] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483526] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c90d00) 00:17:55.004 [2024-04-24 19:48:36.483536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:17:55.004 [2024-04-24 19:48:36.483550] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483558] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483565] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c90d00) 00:17:55.004 [2024-04-24 19:48:36.483574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.004 [2024-04-24 19:48:36.483584] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483591] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483598] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c90d00) 00:17:55.004 [2024-04-24 19:48:36.483606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.004 [2024-04-24 19:48:36.483616] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483623] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483640] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c90d00) 00:17:55.004 [2024-04-24 19:48:36.483650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.004 [2024-04-24 19:48:36.483659] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:55.004 [2024-04-24 19:48:36.483679] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:55.004 [2024-04-24 19:48:36.483692] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.483699] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c90d00) 00:17:55.004 [2024-04-24 19:48:36.483710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.004 [2024-04-24 19:48:36.483733] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cefec0, cid 0, qid 0 00:17:55.004 [2024-04-24 19:48:36.483744] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf0020, cid 1, qid 0 00:17:55.004 [2024-04-24 19:48:36.483752] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf0180, cid 2, qid 0 00:17:55.004 [2024-04-24 19:48:36.483760] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf02e0, cid 3, qid 0 00:17:55.004 [2024-04-24 19:48:36.483768] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf0440, cid 4, qid 0 00:17:55.004 [2024-04-24 19:48:36.483988] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.004 [2024-04-24 19:48:36.484003] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.004 [2024-04-24 19:48:36.484010] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.484017] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cf0440) on tqpair=0x1c90d00 
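The DEBUG trace from here back to the FABRIC CONNECT is the host library's admin-queue bring-up for the discovery controller: read VS and CAP, check CC.EN, write CC.EN = 1, poll until CSTS.RDY = 1, IDENTIFY the controller, configure async event reporting, arm one ASYNC EVENT REQUEST per slot (cid 0-3, matching the Async Event Request Limit of 4 reported further down), and set the keep-alive timer. To isolate just those transitions from a fresh run, the same command the test uses can be filtered (identify.log is an illustrative file name):

  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all 2>&1 | tee identify.log | grep -E 'setting state to|print_command'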
00:17:55.004 [2024-04-24 19:48:36.484029] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:55.004 [2024-04-24 19:48:36.484038] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:55.004 [2024-04-24 19:48:36.484056] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.484065] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c90d00) 00:17:55.004 [2024-04-24 19:48:36.484077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.004 [2024-04-24 19:48:36.484098] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf0440, cid 4, qid 0 00:17:55.004 [2024-04-24 19:48:36.484301] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:55.004 [2024-04-24 19:48:36.484316] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:55.004 [2024-04-24 19:48:36.484322] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.484329] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c90d00): datao=0, datal=4096, cccid=4 00:17:55.004 [2024-04-24 19:48:36.484337] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cf0440) on tqpair(0x1c90d00): expected_datao=0, payload_size=4096 00:17:55.004 [2024-04-24 19:48:36.484344] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.484378] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.484388] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.484491] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.004 [2024-04-24 19:48:36.484506] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.004 [2024-04-24 19:48:36.484513] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.484519] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cf0440) on tqpair=0x1c90d00 00:17:55.004 [2024-04-24 19:48:36.484540] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:55.004 [2024-04-24 19:48:36.484573] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.484583] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c90d00) 00:17:55.004 [2024-04-24 19:48:36.484594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.004 [2024-04-24 19:48:36.484606] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.484614] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.484621] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c90d00) 00:17:55.004 [2024-04-24 19:48:36.484638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.004 [2024-04-24 19:48:36.484668] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf0440, cid 4, qid 0 00:17:55.004 [2024-04-24 19:48:36.484680] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf05a0, cid 5, qid 0 00:17:55.004 [2024-04-24 19:48:36.484900] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:55.004 [2024-04-24 19:48:36.484913] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:55.004 [2024-04-24 19:48:36.484920] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.484926] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c90d00): datao=0, datal=1024, cccid=4 00:17:55.004 [2024-04-24 19:48:36.484934] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cf0440) on tqpair(0x1c90d00): expected_datao=0, payload_size=1024 00:17:55.004 [2024-04-24 19:48:36.484941] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.484951] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.484958] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:55.004 [2024-04-24 19:48:36.484967] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.004 [2024-04-24 19:48:36.484976] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.005 [2024-04-24 19:48:36.484982] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.005 [2024-04-24 19:48:36.484989] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cf05a0) on tqpair=0x1c90d00 00:17:55.266 [2024-04-24 19:48:36.529644] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.266 [2024-04-24 19:48:36.529662] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.266 [2024-04-24 19:48:36.529674] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.266 [2024-04-24 19:48:36.529682] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cf0440) on tqpair=0x1c90d00 00:17:55.266 [2024-04-24 19:48:36.529702] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.266 [2024-04-24 19:48:36.529712] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c90d00) 00:17:55.266 [2024-04-24 19:48:36.529724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.266 [2024-04-24 19:48:36.529755] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf0440, cid 4, qid 0 00:17:55.266 [2024-04-24 19:48:36.529963] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:55.266 [2024-04-24 19:48:36.529979] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:55.266 [2024-04-24 19:48:36.529986] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:55.266 [2024-04-24 19:48:36.529993] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c90d00): datao=0, datal=3072, cccid=4 00:17:55.266 [2024-04-24 19:48:36.530000] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cf0440) on tqpair(0x1c90d00): expected_datao=0, payload_size=3072 00:17:55.266 [2024-04-24 19:48:36.530008] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.266 [2024-04-24 19:48:36.530018] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
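The discovery log itself is fetched with GET LOG PAGE (opcode 02h) against log page 70h. cdw10 packs the log page ID into bits 7:0 and NUMDL (number of dwords to transfer, zero-based) into bits 31:16, which is why cdw10:00ff0070 pairs with datal=1024 (the 1024-byte log header) and cdw10:02ff0070 with datal=3072 (header plus the two 1024-byte records) in the c2h_data lines above; an 8-byte re-read of the generation counter (cdw10:00010070) follows below. A quick decode in bash:

  for cdw10 in 0x00ff0070 0x02ff0070 0x00010070; do
      lid=$(( cdw10 & 0xff ))                       # 0x70 = discovery log page
      dwords=$(( ((cdw10 >> 16) & 0xffff) + 1 ))    # NUMDL is zero-based
      printf '%s  LID=0x%02x  transfer=%d bytes\n' "$cdw10" "$lid" $(( dwords * 4 ))
  done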
00:17:55.266 [2024-04-24 19:48:36.530026] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:55.266 [2024-04-24 19:48:36.530102] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.266 [2024-04-24 19:48:36.530113] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.266 [2024-04-24 19:48:36.530120] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.266 [2024-04-24 19:48:36.530127] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cf0440) on tqpair=0x1c90d00 00:17:55.266 [2024-04-24 19:48:36.530145] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.266 [2024-04-24 19:48:36.530154] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c90d00) 00:17:55.266 [2024-04-24 19:48:36.530165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.266 [2024-04-24 19:48:36.530193] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf0440, cid 4, qid 0 00:17:55.266 [2024-04-24 19:48:36.530364] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:55.266 [2024-04-24 19:48:36.530376] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:55.266 [2024-04-24 19:48:36.530383] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:55.266 [2024-04-24 19:48:36.530389] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c90d00): datao=0, datal=8, cccid=4 00:17:55.266 [2024-04-24 19:48:36.530397] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cf0440) on tqpair(0x1c90d00): expected_datao=0, payload_size=8 00:17:55.266 [2024-04-24 19:48:36.530404] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.266 [2024-04-24 19:48:36.530414] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:55.266 [2024-04-24 19:48:36.530421] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:55.266 [2024-04-24 19:48:36.570846] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.266 [2024-04-24 19:48:36.570865] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.266 [2024-04-24 19:48:36.570873] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.266 [2024-04-24 19:48:36.570880] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cf0440) on tqpair=0x1c90d00 00:17:55.266 ===================================================== 00:17:55.266 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:55.266 ===================================================== 00:17:55.266 Controller Capabilities/Features 00:17:55.266 ================================ 00:17:55.266 Vendor ID: 0000 00:17:55.266 Subsystem Vendor ID: 0000 00:17:55.266 Serial Number: .................... 00:17:55.266 Model Number: ........................................ 
00:17:55.266 Firmware Version: 24.05 00:17:55.266 Recommended Arb Burst: 0 00:17:55.266 IEEE OUI Identifier: 00 00 00 00:17:55.266 Multi-path I/O 00:17:55.266 May have multiple subsystem ports: No 00:17:55.266 May have multiple controllers: No 00:17:55.266 Associated with SR-IOV VF: No 00:17:55.266 Max Data Transfer Size: 131072 00:17:55.266 Max Number of Namespaces: 0 00:17:55.266 Max Number of I/O Queues: 1024 00:17:55.266 NVMe Specification Version (VS): 1.3 00:17:55.266 NVMe Specification Version (Identify): 1.3 00:17:55.266 Maximum Queue Entries: 128 00:17:55.266 Contiguous Queues Required: Yes 00:17:55.266 Arbitration Mechanisms Supported 00:17:55.266 Weighted Round Robin: Not Supported 00:17:55.266 Vendor Specific: Not Supported 00:17:55.266 Reset Timeout: 15000 ms 00:17:55.266 Doorbell Stride: 4 bytes 00:17:55.266 NVM Subsystem Reset: Not Supported 00:17:55.266 Command Sets Supported 00:17:55.266 NVM Command Set: Supported 00:17:55.266 Boot Partition: Not Supported 00:17:55.266 Memory Page Size Minimum: 4096 bytes 00:17:55.266 Memory Page Size Maximum: 4096 bytes 00:17:55.266 Persistent Memory Region: Not Supported 00:17:55.266 Optional Asynchronous Events Supported 00:17:55.266 Namespace Attribute Notices: Not Supported 00:17:55.266 Firmware Activation Notices: Not Supported 00:17:55.266 ANA Change Notices: Not Supported 00:17:55.266 PLE Aggregate Log Change Notices: Not Supported 00:17:55.266 LBA Status Info Alert Notices: Not Supported 00:17:55.266 EGE Aggregate Log Change Notices: Not Supported 00:17:55.266 Normal NVM Subsystem Shutdown event: Not Supported 00:17:55.266 Zone Descriptor Change Notices: Not Supported 00:17:55.266 Discovery Log Change Notices: Supported 00:17:55.266 Controller Attributes 00:17:55.266 128-bit Host Identifier: Not Supported 00:17:55.266 Non-Operational Permissive Mode: Not Supported 00:17:55.266 NVM Sets: Not Supported 00:17:55.266 Read Recovery Levels: Not Supported 00:17:55.266 Endurance Groups: Not Supported 00:17:55.266 Predictable Latency Mode: Not Supported 00:17:55.266 Traffic Based Keep ALive: Not Supported 00:17:55.266 Namespace Granularity: Not Supported 00:17:55.266 SQ Associations: Not Supported 00:17:55.266 UUID List: Not Supported 00:17:55.266 Multi-Domain Subsystem: Not Supported 00:17:55.266 Fixed Capacity Management: Not Supported 00:17:55.266 Variable Capacity Management: Not Supported 00:17:55.266 Delete Endurance Group: Not Supported 00:17:55.266 Delete NVM Set: Not Supported 00:17:55.266 Extended LBA Formats Supported: Not Supported 00:17:55.266 Flexible Data Placement Supported: Not Supported 00:17:55.266 00:17:55.266 Controller Memory Buffer Support 00:17:55.266 ================================ 00:17:55.266 Supported: No 00:17:55.266 00:17:55.266 Persistent Memory Region Support 00:17:55.266 ================================ 00:17:55.266 Supported: No 00:17:55.266 00:17:55.266 Admin Command Set Attributes 00:17:55.266 ============================ 00:17:55.266 Security Send/Receive: Not Supported 00:17:55.266 Format NVM: Not Supported 00:17:55.266 Firmware Activate/Download: Not Supported 00:17:55.266 Namespace Management: Not Supported 00:17:55.266 Device Self-Test: Not Supported 00:17:55.266 Directives: Not Supported 00:17:55.266 NVMe-MI: Not Supported 00:17:55.266 Virtualization Management: Not Supported 00:17:55.266 Doorbell Buffer Config: Not Supported 00:17:55.266 Get LBA Status Capability: Not Supported 00:17:55.266 Command & Feature Lockdown Capability: Not Supported 00:17:55.266 Abort Command Limit: 1 00:17:55.266 Async 
Event Request Limit: 4 00:17:55.266 Number of Firmware Slots: N/A 00:17:55.266 Firmware Slot 1 Read-Only: N/A 00:17:55.266 Firmware Activation Without Reset: N/A 00:17:55.266 Multiple Update Detection Support: N/A 00:17:55.266 Firmware Update Granularity: No Information Provided 00:17:55.266 Per-Namespace SMART Log: No 00:17:55.266 Asymmetric Namespace Access Log Page: Not Supported 00:17:55.266 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:55.266 Command Effects Log Page: Not Supported 00:17:55.266 Get Log Page Extended Data: Supported 00:17:55.266 Telemetry Log Pages: Not Supported 00:17:55.266 Persistent Event Log Pages: Not Supported 00:17:55.267 Supported Log Pages Log Page: May Support 00:17:55.267 Commands Supported & Effects Log Page: Not Supported 00:17:55.267 Feature Identifiers & Effects Log Page:May Support 00:17:55.267 NVMe-MI Commands & Effects Log Page: May Support 00:17:55.267 Data Area 4 for Telemetry Log: Not Supported 00:17:55.267 Error Log Page Entries Supported: 128 00:17:55.267 Keep Alive: Not Supported 00:17:55.267 00:17:55.267 NVM Command Set Attributes 00:17:55.267 ========================== 00:17:55.267 Submission Queue Entry Size 00:17:55.267 Max: 1 00:17:55.267 Min: 1 00:17:55.267 Completion Queue Entry Size 00:17:55.267 Max: 1 00:17:55.267 Min: 1 00:17:55.267 Number of Namespaces: 0 00:17:55.267 Compare Command: Not Supported 00:17:55.267 Write Uncorrectable Command: Not Supported 00:17:55.267 Dataset Management Command: Not Supported 00:17:55.267 Write Zeroes Command: Not Supported 00:17:55.267 Set Features Save Field: Not Supported 00:17:55.267 Reservations: Not Supported 00:17:55.267 Timestamp: Not Supported 00:17:55.267 Copy: Not Supported 00:17:55.267 Volatile Write Cache: Not Present 00:17:55.267 Atomic Write Unit (Normal): 1 00:17:55.267 Atomic Write Unit (PFail): 1 00:17:55.267 Atomic Compare & Write Unit: 1 00:17:55.267 Fused Compare & Write: Supported 00:17:55.267 Scatter-Gather List 00:17:55.267 SGL Command Set: Supported 00:17:55.267 SGL Keyed: Supported 00:17:55.267 SGL Bit Bucket Descriptor: Not Supported 00:17:55.267 SGL Metadata Pointer: Not Supported 00:17:55.267 Oversized SGL: Not Supported 00:17:55.267 SGL Metadata Address: Not Supported 00:17:55.267 SGL Offset: Supported 00:17:55.267 Transport SGL Data Block: Not Supported 00:17:55.267 Replay Protected Memory Block: Not Supported 00:17:55.267 00:17:55.267 Firmware Slot Information 00:17:55.267 ========================= 00:17:55.267 Active slot: 0 00:17:55.267 00:17:55.267 00:17:55.267 Error Log 00:17:55.267 ========= 00:17:55.267 00:17:55.267 Active Namespaces 00:17:55.267 ================= 00:17:55.267 Discovery Log Page 00:17:55.267 ================== 00:17:55.267 Generation Counter: 2 00:17:55.267 Number of Records: 2 00:17:55.267 Record Format: 0 00:17:55.267 00:17:55.267 Discovery Log Entry 0 00:17:55.267 ---------------------- 00:17:55.267 Transport Type: 3 (TCP) 00:17:55.267 Address Family: 1 (IPv4) 00:17:55.267 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:55.267 Entry Flags: 00:17:55.267 Duplicate Returned Information: 1 00:17:55.267 Explicit Persistent Connection Support for Discovery: 1 00:17:55.267 Transport Requirements: 00:17:55.267 Secure Channel: Not Required 00:17:55.267 Port ID: 0 (0x0000) 00:17:55.267 Controller ID: 65535 (0xffff) 00:17:55.267 Admin Max SQ Size: 128 00:17:55.267 Transport Service Identifier: 4420 00:17:55.267 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:55.267 Transport Address: 10.0.0.2 00:17:55.267 
Discovery Log Entry 1 00:17:55.267 ---------------------- 00:17:55.267 Transport Type: 3 (TCP) 00:17:55.267 Address Family: 1 (IPv4) 00:17:55.267 Subsystem Type: 2 (NVM Subsystem) 00:17:55.267 Entry Flags: 00:17:55.267 Duplicate Returned Information: 0 00:17:55.267 Explicit Persistent Connection Support for Discovery: 0 00:17:55.267 Transport Requirements: 00:17:55.267 Secure Channel: Not Required 00:17:55.267 Port ID: 0 (0x0000) 00:17:55.267 Controller ID: 65535 (0xffff) 00:17:55.267 Admin Max SQ Size: 128 00:17:55.267 Transport Service Identifier: 4420 00:17:55.267 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:55.267 Transport Address: 10.0.0.2 [2024-04-24 19:48:36.571009] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:55.267 [2024-04-24 19:48:36.571037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.267 [2024-04-24 19:48:36.571054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.267 [2024-04-24 19:48:36.571065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.267 [2024-04-24 19:48:36.571074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.267 [2024-04-24 19:48:36.571089] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.571097] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.571104] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c90d00) 00:17:55.267 [2024-04-24 19:48:36.571115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.267 [2024-04-24 19:48:36.571156] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf02e0, cid 3, qid 0 00:17:55.267 [2024-04-24 19:48:36.571365] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.267 [2024-04-24 19:48:36.571381] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.267 [2024-04-24 19:48:36.571388] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.571394] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cf02e0) on tqpair=0x1c90d00 00:17:55.267 [2024-04-24 19:48:36.571410] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.571418] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.571425] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c90d00) 00:17:55.267 [2024-04-24 19:48:36.571436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.267 [2024-04-24 19:48:36.571463] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf02e0, cid 3, qid 0 00:17:55.267 [2024-04-24 19:48:36.571640] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.267 [2024-04-24 19:48:36.571654] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.267 [2024-04-24 19:48:36.571661] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.571668] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cf02e0) on tqpair=0x1c90d00 00:17:55.267 [2024-04-24 19:48:36.571679] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:55.267 [2024-04-24 19:48:36.571688] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:55.267 [2024-04-24 19:48:36.571704] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.571713] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.571720] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c90d00) 00:17:55.267 [2024-04-24 19:48:36.571731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.267 [2024-04-24 19:48:36.571756] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf02e0, cid 3, qid 0 00:17:55.267 [2024-04-24 19:48:36.571953] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.267 [2024-04-24 19:48:36.571964] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.267 [2024-04-24 19:48:36.571972] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.571978] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cf02e0) on tqpair=0x1c90d00 00:17:55.267 [2024-04-24 19:48:36.571997] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.572007] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.572013] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c90d00) 00:17:55.267 [2024-04-24 19:48:36.572028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.267 [2024-04-24 19:48:36.572049] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf02e0, cid 3, qid 0 00:17:55.267 [2024-04-24 19:48:36.572196] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.267 [2024-04-24 19:48:36.572208] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.267 [2024-04-24 19:48:36.572215] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.572221] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cf02e0) on tqpair=0x1c90d00 00:17:55.267 [2024-04-24 19:48:36.572238] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.572248] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.572255] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c90d00) 00:17:55.267 [2024-04-24 19:48:36.572265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.267 [2024-04-24 19:48:36.572285] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf02e0, cid 3, qid 0 00:17:55.267 [2024-04-24 19:48:36.572428] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.267 [2024-04-24 
19:48:36.572443] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.267 [2024-04-24 19:48:36.572449] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.572456] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cf02e0) on tqpair=0x1c90d00 00:17:55.267 [2024-04-24 19:48:36.572474] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.572484] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.267 [2024-04-24 19:48:36.572490] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c90d00) 00:17:55.267 [2024-04-24 19:48:36.572501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.267 [2024-04-24 19:48:36.572522] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf02e0, cid 3, qid 0 00:17:55.267 [2024-04-24 19:48:36.576641] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.267 [2024-04-24 19:48:36.576657] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.268 [2024-04-24 19:48:36.576664] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.576671] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cf02e0) on tqpair=0x1c90d00 00:17:55.268 [2024-04-24 19:48:36.576695] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.576719] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.576726] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c90d00) 00:17:55.268 [2024-04-24 19:48:36.576737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.268 [2024-04-24 19:48:36.576760] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf02e0, cid 3, qid 0 00:17:55.268 [2024-04-24 19:48:36.576962] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.268 [2024-04-24 19:48:36.576974] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.268 [2024-04-24 19:48:36.576981] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.576988] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cf02e0) on tqpair=0x1c90d00 00:17:55.268 [2024-04-24 19:48:36.577002] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:17:55.268 00:17:55.268 19:48:36 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:55.268 [2024-04-24 19:48:36.612639] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
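host/identify.sh@45 now repeats the identify flow directly against nqn.2016-06.io.spdk:cnode1, walking the same controller-init state machine on a fresh admin qpair (tqpair 0x22eed00 in the traces below, versus 0x1c90d00 for the discovery controller). Outside the SPDK tooling, the same listener can be exercised with the kernel initiator; a minimal sketch, assuming nvme-cli and the nvme-tcp module are available on the host:

  # prints the same two discovery log entries shown above
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # attaches the NVM subsystem; a /dev/nvmeXnY backed by Malloc0 should appear
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1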
00:17:55.268 [2024-04-24 19:48:36.612686] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734545 ] 00:17:55.268 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.268 [2024-04-24 19:48:36.647797] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:55.268 [2024-04-24 19:48:36.647852] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:55.268 [2024-04-24 19:48:36.647862] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:55.268 [2024-04-24 19:48:36.647877] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:55.268 [2024-04-24 19:48:36.647889] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:55.268 [2024-04-24 19:48:36.648197] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:55.268 [2024-04-24 19:48:36.648238] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22eed00 0 00:17:55.268 [2024-04-24 19:48:36.658646] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:55.268 [2024-04-24 19:48:36.658665] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:55.268 [2024-04-24 19:48:36.658674] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:55.268 [2024-04-24 19:48:36.658680] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:55.268 [2024-04-24 19:48:36.658719] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.658731] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.658737] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22eed00) 00:17:55.268 [2024-04-24 19:48:36.658753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:55.268 [2024-04-24 19:48:36.658779] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234dec0, cid 0, qid 0 00:17:55.268 [2024-04-24 19:48:36.665641] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.268 [2024-04-24 19:48:36.665660] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.268 [2024-04-24 19:48:36.665668] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.665675] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234dec0) on tqpair=0x22eed00 00:17:55.268 [2024-04-24 19:48:36.665695] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:55.268 [2024-04-24 19:48:36.665707] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:55.268 [2024-04-24 19:48:36.665716] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:55.268 [2024-04-24 19:48:36.665734] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.665743] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.268 [2024-04-24 
19:48:36.665750] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22eed00) 00:17:55.268 [2024-04-24 19:48:36.665762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.268 [2024-04-24 19:48:36.665786] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234dec0, cid 0, qid 0 00:17:55.268 [2024-04-24 19:48:36.665974] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.268 [2024-04-24 19:48:36.665986] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.268 [2024-04-24 19:48:36.665998] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.666006] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234dec0) on tqpair=0x22eed00 00:17:55.268 [2024-04-24 19:48:36.666016] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:55.268 [2024-04-24 19:48:36.666029] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:55.268 [2024-04-24 19:48:36.666042] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.666050] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.666056] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22eed00) 00:17:55.268 [2024-04-24 19:48:36.666067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.268 [2024-04-24 19:48:36.666089] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234dec0, cid 0, qid 0 00:17:55.268 [2024-04-24 19:48:36.666270] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.268 [2024-04-24 19:48:36.666282] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.268 [2024-04-24 19:48:36.666289] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.666296] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234dec0) on tqpair=0x22eed00 00:17:55.268 [2024-04-24 19:48:36.666306] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:55.268 [2024-04-24 19:48:36.666320] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:55.268 [2024-04-24 19:48:36.666331] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.666339] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.666345] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22eed00) 00:17:55.268 [2024-04-24 19:48:36.666356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.268 [2024-04-24 19:48:36.666377] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234dec0, cid 0, qid 0 00:17:55.268 [2024-04-24 19:48:36.666526] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.268 [2024-04-24 19:48:36.666541] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:17:55.268 [2024-04-24 19:48:36.666548] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.666555] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234dec0) on tqpair=0x22eed00 00:17:55.268 [2024-04-24 19:48:36.666565] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:55.268 [2024-04-24 19:48:36.666582] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.666591] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.666597] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22eed00) 00:17:55.268 [2024-04-24 19:48:36.666608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.268 [2024-04-24 19:48:36.666635] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234dec0, cid 0, qid 0 00:17:55.268 [2024-04-24 19:48:36.666780] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.268 [2024-04-24 19:48:36.666795] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.268 [2024-04-24 19:48:36.666802] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.666809] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234dec0) on tqpair=0x22eed00 00:17:55.268 [2024-04-24 19:48:36.666822] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:55.268 [2024-04-24 19:48:36.666831] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:55.268 [2024-04-24 19:48:36.666844] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:55.268 [2024-04-24 19:48:36.666955] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:55.268 [2024-04-24 19:48:36.666962] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:55.268 [2024-04-24 19:48:36.666975] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.666983] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.666989] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22eed00) 00:17:55.268 [2024-04-24 19:48:36.667015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.268 [2024-04-24 19:48:36.667038] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234dec0, cid 0, qid 0 00:17:55.268 [2024-04-24 19:48:36.667247] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.268 [2024-04-24 19:48:36.667262] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.268 [2024-04-24 19:48:36.667269] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.667276] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234dec0) on 
tqpair=0x22eed00 00:17:55.268 [2024-04-24 19:48:36.667286] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:55.268 [2024-04-24 19:48:36.667303] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.667312] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.268 [2024-04-24 19:48:36.667319] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22eed00) 00:17:55.268 [2024-04-24 19:48:36.667329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.269 [2024-04-24 19:48:36.667351] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234dec0, cid 0, qid 0 00:17:55.269 [2024-04-24 19:48:36.667523] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.269 [2024-04-24 19:48:36.667538] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.269 [2024-04-24 19:48:36.667545] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.667551] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234dec0) on tqpair=0x22eed00 00:17:55.269 [2024-04-24 19:48:36.667561] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:55.269 [2024-04-24 19:48:36.667569] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:55.269 [2024-04-24 19:48:36.667583] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:55.269 [2024-04-24 19:48:36.667598] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:55.269 [2024-04-24 19:48:36.667614] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.667622] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22eed00) 00:17:55.269 [2024-04-24 19:48:36.667642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.269 [2024-04-24 19:48:36.667668] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234dec0, cid 0, qid 0 00:17:55.269 [2024-04-24 19:48:36.667886] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:55.269 [2024-04-24 19:48:36.667902] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:55.269 [2024-04-24 19:48:36.667909] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.667915] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22eed00): datao=0, datal=4096, cccid=0 00:17:55.269 [2024-04-24 19:48:36.667923] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234dec0) on tqpair(0x22eed00): expected_datao=0, payload_size=4096 00:17:55.269 [2024-04-24 19:48:36.667930] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.667964] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.667973] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668123] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.269 [2024-04-24 19:48:36.668135] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.269 [2024-04-24 19:48:36.668141] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668148] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234dec0) on tqpair=0x22eed00 00:17:55.269 [2024-04-24 19:48:36.668161] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:55.269 [2024-04-24 19:48:36.668170] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:55.269 [2024-04-24 19:48:36.668178] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:55.269 [2024-04-24 19:48:36.668186] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:55.269 [2024-04-24 19:48:36.668193] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:55.269 [2024-04-24 19:48:36.668201] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:55.269 [2024-04-24 19:48:36.668215] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:55.269 [2024-04-24 19:48:36.668227] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668234] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668241] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22eed00) 00:17:55.269 [2024-04-24 19:48:36.668252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:55.269 [2024-04-24 19:48:36.668273] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234dec0, cid 0, qid 0 00:17:55.269 [2024-04-24 19:48:36.668459] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.269 [2024-04-24 19:48:36.668475] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.269 [2024-04-24 19:48:36.668482] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668488] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234dec0) on tqpair=0x22eed00 00:17:55.269 [2024-04-24 19:48:36.668500] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668507] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668514] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22eed00) 00:17:55.269 [2024-04-24 19:48:36.668524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.269 [2024-04-24 19:48:36.668534] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668541] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668547] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22eed00) 00:17:55.269 [2024-04-24 19:48:36.668560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.269 [2024-04-24 19:48:36.668570] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668577] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668584] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22eed00) 00:17:55.269 [2024-04-24 19:48:36.668592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.269 [2024-04-24 19:48:36.668602] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668609] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668615] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.269 [2024-04-24 19:48:36.668624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.269 [2024-04-24 19:48:36.668642] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:55.269 [2024-04-24 19:48:36.668663] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:55.269 [2024-04-24 19:48:36.668676] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668683] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22eed00) 00:17:55.269 [2024-04-24 19:48:36.668694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.269 [2024-04-24 19:48:36.668717] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234dec0, cid 0, qid 0 00:17:55.269 [2024-04-24 19:48:36.668728] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e020, cid 1, qid 0 00:17:55.269 [2024-04-24 19:48:36.668736] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e180, cid 2, qid 0 00:17:55.269 [2024-04-24 19:48:36.668744] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e2e0, cid 3, qid 0 00:17:55.269 [2024-04-24 19:48:36.668751] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e440, cid 4, qid 0 00:17:55.269 [2024-04-24 19:48:36.668964] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.269 [2024-04-24 19:48:36.668976] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.269 [2024-04-24 19:48:36.668982] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.668989] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e440) on tqpair=0x22eed00 00:17:55.269 [2024-04-24 19:48:36.668999] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:55.269 [2024-04-24 19:48:36.669008] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:17:55.269 [2024-04-24 19:48:36.669027] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:55.269 [2024-04-24 19:48:36.669041] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:55.269 [2024-04-24 19:48:36.669051] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.669059] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.669065] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22eed00) 00:17:55.269 [2024-04-24 19:48:36.669077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:55.269 [2024-04-24 19:48:36.669101] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e440, cid 4, qid 0 00:17:55.269 [2024-04-24 19:48:36.669284] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.269 [2024-04-24 19:48:36.669296] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.269 [2024-04-24 19:48:36.669303] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.669310] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e440) on tqpair=0x22eed00 00:17:55.269 [2024-04-24 19:48:36.669366] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:55.269 [2024-04-24 19:48:36.669386] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:55.269 [2024-04-24 19:48:36.669402] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.669409] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22eed00) 00:17:55.269 [2024-04-24 19:48:36.669420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.269 [2024-04-24 19:48:36.669442] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e440, cid 4, qid 0 00:17:55.269 [2024-04-24 19:48:36.673644] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:55.269 [2024-04-24 19:48:36.673661] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:55.269 [2024-04-24 19:48:36.673667] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.673674] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22eed00): datao=0, datal=4096, cccid=4 00:17:55.269 [2024-04-24 19:48:36.673681] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234e440) on tqpair(0x22eed00): expected_datao=0, payload_size=4096 00:17:55.269 [2024-04-24 19:48:36.673688] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.673698] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.673705] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:55.269 [2024-04-24 19:48:36.713640] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.269 [2024-04-24 19:48:36.713658] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.270 [2024-04-24 19:48:36.713666] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.270 [2024-04-24 19:48:36.713673] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e440) on tqpair=0x22eed00 00:17:55.270 [2024-04-24 19:48:36.713693] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:55.270 [2024-04-24 19:48:36.713716] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:55.270 [2024-04-24 19:48:36.713737] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:55.270 [2024-04-24 19:48:36.713751] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.270 [2024-04-24 19:48:36.713759] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22eed00) 00:17:55.270 [2024-04-24 19:48:36.713770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.270 [2024-04-24 19:48:36.713794] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e440, cid 4, qid 0 00:17:55.270 [2024-04-24 19:48:36.714001] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:55.270 [2024-04-24 19:48:36.714016] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:55.270 [2024-04-24 19:48:36.714023] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:55.270 [2024-04-24 19:48:36.714030] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22eed00): datao=0, datal=4096, cccid=4 00:17:55.270 [2024-04-24 19:48:36.714042] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234e440) on tqpair(0x22eed00): expected_datao=0, payload_size=4096 00:17:55.270 [2024-04-24 19:48:36.714050] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.270 [2024-04-24 19:48:36.714073] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:55.270 [2024-04-24 19:48:36.714082] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:55.270 [2024-04-24 19:48:36.754803] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.270 [2024-04-24 19:48:36.754822] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.270 [2024-04-24 19:48:36.754829] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.270 [2024-04-24 19:48:36.754836] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e440) on tqpair=0x22eed00 00:17:55.270 [2024-04-24 19:48:36.754862] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:55.270 [2024-04-24 19:48:36.754882] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:55.270 [2024-04-24 19:48:36.754897] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.270 [2024-04-24 19:48:36.754905] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x22eed00) 00:17:55.270 [2024-04-24 19:48:36.754917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.270 [2024-04-24 19:48:36.754940] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e440, cid 4, qid 0 00:17:55.270 [2024-04-24 19:48:36.755105] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:55.270 [2024-04-24 19:48:36.755118] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:55.270 [2024-04-24 19:48:36.755125] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:55.270 [2024-04-24 19:48:36.755131] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22eed00): datao=0, datal=4096, cccid=4 00:17:55.270 [2024-04-24 19:48:36.755139] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234e440) on tqpair(0x22eed00): expected_datao=0, payload_size=4096 00:17:55.270 [2024-04-24 19:48:36.755146] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.270 [2024-04-24 19:48:36.755181] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:55.270 [2024-04-24 19:48:36.755190] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:55.531 [2024-04-24 19:48:36.795807] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.531 [2024-04-24 19:48:36.795825] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.531 [2024-04-24 19:48:36.795833] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.531 [2024-04-24 19:48:36.795840] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e440) on tqpair=0x22eed00 00:17:55.531 [2024-04-24 19:48:36.795858] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:55.531 [2024-04-24 19:48:36.795873] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:55.531 [2024-04-24 19:48:36.795893] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:55.531 [2024-04-24 19:48:36.795905] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:55.531 [2024-04-24 19:48:36.795914] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:55.531 [2024-04-24 19:48:36.795923] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:55.531 [2024-04-24 19:48:36.795931] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:55.531 [2024-04-24 19:48:36.795944] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:55.531 [2024-04-24 19:48:36.795964] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.531 [2024-04-24 19:48:36.795973] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22eed00) 00:17:55.531 [2024-04-24 19:48:36.795985] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.531 [2024-04-24 19:48:36.795996] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.531 [2024-04-24 19:48:36.796004] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.531 [2024-04-24 19:48:36.796010] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22eed00) 00:17:55.531 [2024-04-24 19:48:36.796019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.531 [2024-04-24 19:48:36.796046] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e440, cid 4, qid 0 00:17:55.531 [2024-04-24 19:48:36.796058] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e5a0, cid 5, qid 0 00:17:55.531 [2024-04-24 19:48:36.796214] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.531 [2024-04-24 19:48:36.796226] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.531 [2024-04-24 19:48:36.796233] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.531 [2024-04-24 19:48:36.796240] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e440) on tqpair=0x22eed00 00:17:55.531 [2024-04-24 19:48:36.796252] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.531 [2024-04-24 19:48:36.796262] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.531 [2024-04-24 19:48:36.796268] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.531 [2024-04-24 19:48:36.796275] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e5a0) on tqpair=0x22eed00 00:17:55.531 [2024-04-24 19:48:36.796291] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.531 [2024-04-24 19:48:36.796300] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22eed00) 00:17:55.531 [2024-04-24 19:48:36.796311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.531 [2024-04-24 19:48:36.796332] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e5a0, cid 5, qid 0 00:17:55.531 [2024-04-24 19:48:36.796515] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.531 [2024-04-24 19:48:36.796530] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.531 [2024-04-24 19:48:36.796537] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.531 [2024-04-24 19:48:36.796544] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e5a0) on tqpair=0x22eed00 00:17:55.531 [2024-04-24 19:48:36.796561] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.531 [2024-04-24 19:48:36.796570] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22eed00) 00:17:55.531 [2024-04-24 19:48:36.796581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.531 [2024-04-24 19:48:36.796601] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e5a0, cid 5, qid 0 00:17:55.531 [2024-04-24 19:48:36.796758] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.531 [2024-04-24 19:48:36.796771] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.531 [2024-04-24 19:48:36.796778] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.531 [2024-04-24 19:48:36.796785] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e5a0) on tqpair=0x22eed00 00:17:55.531 [2024-04-24 19:48:36.796801] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.531 [2024-04-24 19:48:36.796814] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22eed00) 00:17:55.531 [2024-04-24 19:48:36.796826] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.531 [2024-04-24 19:48:36.796847] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e5a0, cid 5, qid 0 00:17:55.531 [2024-04-24 19:48:36.797004] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.531 [2024-04-24 19:48:36.797020] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.531 [2024-04-24 19:48:36.797027] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.797033] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e5a0) on tqpair=0x22eed00 00:17:55.532 [2024-04-24 19:48:36.797055] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.797065] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22eed00) 00:17:55.532 [2024-04-24 19:48:36.797076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.532 [2024-04-24 19:48:36.797088] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.797096] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22eed00) 00:17:55.532 [2024-04-24 19:48:36.797105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.532 [2024-04-24 19:48:36.797117] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.797124] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x22eed00) 00:17:55.532 [2024-04-24 19:48:36.797134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.532 [2024-04-24 19:48:36.797146] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.797153] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22eed00) 00:17:55.532 [2024-04-24 19:48:36.797162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.532 [2024-04-24 19:48:36.797199] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e5a0, cid 5, qid 0 00:17:55.532 [2024-04-24 19:48:36.797211] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e440, cid 4, qid 0 00:17:55.532 [2024-04-24 19:48:36.797218] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x234e700, cid 6, qid 0 00:17:55.532 [2024-04-24 19:48:36.797226] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e860, cid 7, qid 0 00:17:55.532 [2024-04-24 19:48:36.797542] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:55.532 [2024-04-24 19:48:36.797555] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:55.532 [2024-04-24 19:48:36.797562] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.797568] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22eed00): datao=0, datal=8192, cccid=5 00:17:55.532 [2024-04-24 19:48:36.797576] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234e5a0) on tqpair(0x22eed00): expected_datao=0, payload_size=8192 00:17:55.532 [2024-04-24 19:48:36.797583] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.797622] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801643] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801656] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:55.532 [2024-04-24 19:48:36.801665] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:55.532 [2024-04-24 19:48:36.801675] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801682] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22eed00): datao=0, datal=512, cccid=4 00:17:55.532 [2024-04-24 19:48:36.801689] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234e440) on tqpair(0x22eed00): expected_datao=0, payload_size=512 00:17:55.532 [2024-04-24 19:48:36.801696] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801706] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801713] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801721] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:55.532 [2024-04-24 19:48:36.801729] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:55.532 [2024-04-24 19:48:36.801735] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801741] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22eed00): datao=0, datal=512, cccid=6 00:17:55.532 [2024-04-24 19:48:36.801749] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234e700) on tqpair(0x22eed00): expected_datao=0, payload_size=512 00:17:55.532 [2024-04-24 19:48:36.801756] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801764] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801771] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801779] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:55.532 [2024-04-24 19:48:36.801787] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:55.532 [2024-04-24 19:48:36.801793] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801799] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22eed00): datao=0, datal=4096, cccid=7 
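
Editor's note: once AER, keep-alive, and queue counts are configured, the "pdu type = 7" entries above are C2HData PDUs delivering the Identify (payload_size=4096) and Get Log Page payloads that populate the controller report printed next. That report comes from the SPDK identify example that host/identify.sh drives. A hedged reproduction against this run's target, assuming the newer build/examples binary layout (older trees ship it as examples/nvme/identify/identify):

  # SPDK host-side identify over NVMe/TCP, mirroring the connect
  # parameters visible in this log (10.0.0.2:4420).
  ./build/examples/identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

  # Rough kernel-initiator equivalent (output format differs; /dev/nvme0
  # is an assumption about enumeration order on the host):
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0
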
00:17:55.532 [2024-04-24 19:48:36.801807] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x234e860) on tqpair(0x22eed00): expected_datao=0, payload_size=4096 00:17:55.532 [2024-04-24 19:48:36.801814] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801823] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801829] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801841] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.532 [2024-04-24 19:48:36.801850] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.532 [2024-04-24 19:48:36.801856] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801863] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e5a0) on tqpair=0x22eed00 00:17:55.532 [2024-04-24 19:48:36.801884] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.532 [2024-04-24 19:48:36.801895] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.532 [2024-04-24 19:48:36.801902] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801923] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e440) on tqpair=0x22eed00 00:17:55.532 [2024-04-24 19:48:36.801938] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.532 [2024-04-24 19:48:36.801948] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.532 [2024-04-24 19:48:36.801954] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801961] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e700) on tqpair=0x22eed00 00:17:55.532 [2024-04-24 19:48:36.801972] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.532 [2024-04-24 19:48:36.801981] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.532 [2024-04-24 19:48:36.801987] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.532 [2024-04-24 19:48:36.801993] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e860) on tqpair=0x22eed00 00:17:55.532 ===================================================== 00:17:55.532 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:55.532 ===================================================== 00:17:55.532 Controller Capabilities/Features 00:17:55.532 ================================ 00:17:55.532 Vendor ID: 8086 00:17:55.532 Subsystem Vendor ID: 8086 00:17:55.532 Serial Number: SPDK00000000000001 00:17:55.532 Model Number: SPDK bdev Controller 00:17:55.532 Firmware Version: 24.05 00:17:55.532 Recommended Arb Burst: 6 00:17:55.532 IEEE OUI Identifier: e4 d2 5c 00:17:55.532 Multi-path I/O 00:17:55.532 May have multiple subsystem ports: Yes 00:17:55.532 May have multiple controllers: Yes 00:17:55.532 Associated with SR-IOV VF: No 00:17:55.532 Max Data Transfer Size: 131072 00:17:55.532 Max Number of Namespaces: 32 00:17:55.532 Max Number of I/O Queues: 127 00:17:55.532 NVMe Specification Version (VS): 1.3 00:17:55.532 NVMe Specification Version (Identify): 1.3 00:17:55.532 Maximum Queue Entries: 128 00:17:55.532 Contiguous Queues Required: Yes 00:17:55.532 Arbitration Mechanisms Supported 00:17:55.532 Weighted Round Robin: Not Supported 00:17:55.532 Vendor 
Specific: Not Supported 00:17:55.532 Reset Timeout: 15000 ms 00:17:55.532 Doorbell Stride: 4 bytes 00:17:55.532 NVM Subsystem Reset: Not Supported 00:17:55.532 Command Sets Supported 00:17:55.532 NVM Command Set: Supported 00:17:55.532 Boot Partition: Not Supported 00:17:55.532 Memory Page Size Minimum: 4096 bytes 00:17:55.532 Memory Page Size Maximum: 4096 bytes 00:17:55.532 Persistent Memory Region: Not Supported 00:17:55.532 Optional Asynchronous Events Supported 00:17:55.532 Namespace Attribute Notices: Supported 00:17:55.532 Firmware Activation Notices: Not Supported 00:17:55.532 ANA Change Notices: Not Supported 00:17:55.532 PLE Aggregate Log Change Notices: Not Supported 00:17:55.532 LBA Status Info Alert Notices: Not Supported 00:17:55.532 EGE Aggregate Log Change Notices: Not Supported 00:17:55.532 Normal NVM Subsystem Shutdown event: Not Supported 00:17:55.532 Zone Descriptor Change Notices: Not Supported 00:17:55.532 Discovery Log Change Notices: Not Supported 00:17:55.532 Controller Attributes 00:17:55.532 128-bit Host Identifier: Supported 00:17:55.532 Non-Operational Permissive Mode: Not Supported 00:17:55.532 NVM Sets: Not Supported 00:17:55.532 Read Recovery Levels: Not Supported 00:17:55.532 Endurance Groups: Not Supported 00:17:55.532 Predictable Latency Mode: Not Supported 00:17:55.532 Traffic Based Keep ALive: Not Supported 00:17:55.532 Namespace Granularity: Not Supported 00:17:55.532 SQ Associations: Not Supported 00:17:55.532 UUID List: Not Supported 00:17:55.532 Multi-Domain Subsystem: Not Supported 00:17:55.532 Fixed Capacity Management: Not Supported 00:17:55.532 Variable Capacity Management: Not Supported 00:17:55.532 Delete Endurance Group: Not Supported 00:17:55.532 Delete NVM Set: Not Supported 00:17:55.532 Extended LBA Formats Supported: Not Supported 00:17:55.532 Flexible Data Placement Supported: Not Supported 00:17:55.532 00:17:55.532 Controller Memory Buffer Support 00:17:55.532 ================================ 00:17:55.532 Supported: No 00:17:55.532 00:17:55.532 Persistent Memory Region Support 00:17:55.532 ================================ 00:17:55.532 Supported: No 00:17:55.532 00:17:55.532 Admin Command Set Attributes 00:17:55.533 ============================ 00:17:55.533 Security Send/Receive: Not Supported 00:17:55.533 Format NVM: Not Supported 00:17:55.533 Firmware Activate/Download: Not Supported 00:17:55.533 Namespace Management: Not Supported 00:17:55.533 Device Self-Test: Not Supported 00:17:55.533 Directives: Not Supported 00:17:55.533 NVMe-MI: Not Supported 00:17:55.533 Virtualization Management: Not Supported 00:17:55.533 Doorbell Buffer Config: Not Supported 00:17:55.533 Get LBA Status Capability: Not Supported 00:17:55.533 Command & Feature Lockdown Capability: Not Supported 00:17:55.533 Abort Command Limit: 4 00:17:55.533 Async Event Request Limit: 4 00:17:55.533 Number of Firmware Slots: N/A 00:17:55.533 Firmware Slot 1 Read-Only: N/A 00:17:55.533 Firmware Activation Without Reset: N/A 00:17:55.533 Multiple Update Detection Support: N/A 00:17:55.533 Firmware Update Granularity: No Information Provided 00:17:55.533 Per-Namespace SMART Log: No 00:17:55.533 Asymmetric Namespace Access Log Page: Not Supported 00:17:55.533 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:55.533 Command Effects Log Page: Supported 00:17:55.533 Get Log Page Extended Data: Supported 00:17:55.533 Telemetry Log Pages: Not Supported 00:17:55.533 Persistent Event Log Pages: Not Supported 00:17:55.533 Supported Log Pages Log Page: May Support 00:17:55.533 Commands 
Supported & Effects Log Page: Not Supported 00:17:55.533 Feature Identifiers & Effects Log Page:May Support 00:17:55.533 NVMe-MI Commands & Effects Log Page: May Support 00:17:55.533 Data Area 4 for Telemetry Log: Not Supported 00:17:55.533 Error Log Page Entries Supported: 128 00:17:55.533 Keep Alive: Supported 00:17:55.533 Keep Alive Granularity: 10000 ms 00:17:55.533 00:17:55.533 NVM Command Set Attributes 00:17:55.533 ========================== 00:17:55.533 Submission Queue Entry Size 00:17:55.533 Max: 64 00:17:55.533 Min: 64 00:17:55.533 Completion Queue Entry Size 00:17:55.533 Max: 16 00:17:55.533 Min: 16 00:17:55.533 Number of Namespaces: 32 00:17:55.533 Compare Command: Supported 00:17:55.533 Write Uncorrectable Command: Not Supported 00:17:55.533 Dataset Management Command: Supported 00:17:55.533 Write Zeroes Command: Supported 00:17:55.533 Set Features Save Field: Not Supported 00:17:55.533 Reservations: Supported 00:17:55.533 Timestamp: Not Supported 00:17:55.533 Copy: Supported 00:17:55.533 Volatile Write Cache: Present 00:17:55.533 Atomic Write Unit (Normal): 1 00:17:55.533 Atomic Write Unit (PFail): 1 00:17:55.533 Atomic Compare & Write Unit: 1 00:17:55.533 Fused Compare & Write: Supported 00:17:55.533 Scatter-Gather List 00:17:55.533 SGL Command Set: Supported 00:17:55.533 SGL Keyed: Supported 00:17:55.533 SGL Bit Bucket Descriptor: Not Supported 00:17:55.533 SGL Metadata Pointer: Not Supported 00:17:55.533 Oversized SGL: Not Supported 00:17:55.533 SGL Metadata Address: Not Supported 00:17:55.533 SGL Offset: Supported 00:17:55.533 Transport SGL Data Block: Not Supported 00:17:55.533 Replay Protected Memory Block: Not Supported 00:17:55.533 00:17:55.533 Firmware Slot Information 00:17:55.533 ========================= 00:17:55.533 Active slot: 1 00:17:55.533 Slot 1 Firmware Revision: 24.05 00:17:55.533 00:17:55.533 00:17:55.533 Commands Supported and Effects 00:17:55.533 ============================== 00:17:55.533 Admin Commands 00:17:55.533 -------------- 00:17:55.533 Get Log Page (02h): Supported 00:17:55.533 Identify (06h): Supported 00:17:55.533 Abort (08h): Supported 00:17:55.533 Set Features (09h): Supported 00:17:55.533 Get Features (0Ah): Supported 00:17:55.533 Asynchronous Event Request (0Ch): Supported 00:17:55.533 Keep Alive (18h): Supported 00:17:55.533 I/O Commands 00:17:55.533 ------------ 00:17:55.533 Flush (00h): Supported LBA-Change 00:17:55.533 Write (01h): Supported LBA-Change 00:17:55.533 Read (02h): Supported 00:17:55.533 Compare (05h): Supported 00:17:55.533 Write Zeroes (08h): Supported LBA-Change 00:17:55.533 Dataset Management (09h): Supported LBA-Change 00:17:55.533 Copy (19h): Supported LBA-Change 00:17:55.533 Unknown (79h): Supported LBA-Change 00:17:55.533 Unknown (7Ah): Supported 00:17:55.533 00:17:55.533 Error Log 00:17:55.533 ========= 00:17:55.533 00:17:55.533 Arbitration 00:17:55.533 =========== 00:17:55.533 Arbitration Burst: 1 00:17:55.533 00:17:55.533 Power Management 00:17:55.533 ================ 00:17:55.533 Number of Power States: 1 00:17:55.533 Current Power State: Power State #0 00:17:55.533 Power State #0: 00:17:55.533 Max Power: 0.00 W 00:17:55.533 Non-Operational State: Operational 00:17:55.533 Entry Latency: Not Reported 00:17:55.533 Exit Latency: Not Reported 00:17:55.533 Relative Read Throughput: 0 00:17:55.533 Relative Read Latency: 0 00:17:55.533 Relative Write Throughput: 0 00:17:55.533 Relative Write Latency: 0 00:17:55.533 Idle Power: Not Reported 00:17:55.533 Active Power: Not Reported 00:17:55.533 Non-Operational 
Permissive Mode: Not Supported 00:17:55.533 00:17:55.533 Health Information 00:17:55.533 ================== 00:17:55.533 Critical Warnings: 00:17:55.533 Available Spare Space: OK 00:17:55.533 Temperature: OK 00:17:55.533 Device Reliability: OK 00:17:55.533 Read Only: No 00:17:55.533 Volatile Memory Backup: OK 00:17:55.533 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:55.533 Temperature Threshold: [2024-04-24 19:48:36.802132] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.533 [2024-04-24 19:48:36.802146] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22eed00) 00:17:55.533 [2024-04-24 19:48:36.802158] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.533 [2024-04-24 19:48:36.802182] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e860, cid 7, qid 0 00:17:55.533 [2024-04-24 19:48:36.802394] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.533 [2024-04-24 19:48:36.802410] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.533 [2024-04-24 19:48:36.802417] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.533 [2024-04-24 19:48:36.802424] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e860) on tqpair=0x22eed00 00:17:55.533 [2024-04-24 19:48:36.802469] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:55.533 [2024-04-24 19:48:36.802490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.533 [2024-04-24 19:48:36.802503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.533 [2024-04-24 19:48:36.802512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.533 [2024-04-24 19:48:36.802522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.533 [2024-04-24 19:48:36.802535] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.533 [2024-04-24 19:48:36.802543] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.533 [2024-04-24 19:48:36.802549] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.533 [2024-04-24 19:48:36.802560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.533 [2024-04-24 19:48:36.802583] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e2e0, cid 3, qid 0 00:17:55.533 [2024-04-24 19:48:36.802768] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.533 [2024-04-24 19:48:36.802784] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.533 [2024-04-24 19:48:36.802791] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.533 [2024-04-24 19:48:36.802797] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e2e0) on tqpair=0x22eed00 00:17:55.533 [2024-04-24 19:48:36.802810] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.533 [2024-04-24 19:48:36.802818] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.533 [2024-04-24 19:48:36.802825] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.533 [2024-04-24 19:48:36.802835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.533 [2024-04-24 19:48:36.802862] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e2e0, cid 3, qid 0 00:17:55.533 [2024-04-24 19:48:36.803025] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.533 [2024-04-24 19:48:36.803036] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.533 [2024-04-24 19:48:36.803043] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.533 [2024-04-24 19:48:36.803050] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e2e0) on tqpair=0x22eed00 00:17:55.533 [2024-04-24 19:48:36.803060] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:55.533 [2024-04-24 19:48:36.803067] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:55.533 [2024-04-24 19:48:36.803082] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.533 [2024-04-24 19:48:36.803091] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.533 [2024-04-24 19:48:36.803102] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.533 [2024-04-24 19:48:36.803113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.533 [2024-04-24 19:48:36.803134] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e2e0, cid 3, qid 0 00:17:55.533 [2024-04-24 19:48:36.803312] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.534 [2024-04-24 19:48:36.803328] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.534 [2024-04-24 19:48:36.803334] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.803341] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e2e0) on tqpair=0x22eed00 00:17:55.534 [2024-04-24 19:48:36.803358] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.803368] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.803374] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.534 [2024-04-24 19:48:36.803385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.534 [2024-04-24 19:48:36.803406] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e2e0, cid 3, qid 0 00:17:55.534 [2024-04-24 19:48:36.803554] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.534 [2024-04-24 19:48:36.803566] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.534 [2024-04-24 19:48:36.803573] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.803580] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e2e0) on tqpair=0x22eed00 00:17:55.534 [2024-04-24 19:48:36.803597] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.803606] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.803613] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.534 [2024-04-24 19:48:36.803623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.534 [2024-04-24 19:48:36.803652] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e2e0, cid 3, qid 0 00:17:55.534 [2024-04-24 19:48:36.803799] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.534 [2024-04-24 19:48:36.803815] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.534 [2024-04-24 19:48:36.803822] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.803828] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e2e0) on tqpair=0x22eed00 00:17:55.534 [2024-04-24 19:48:36.803846] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.803855] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.803862] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.534 [2024-04-24 19:48:36.803872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.534 [2024-04-24 19:48:36.803893] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e2e0, cid 3, qid 0 00:17:55.534 [2024-04-24 19:48:36.804041] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.534 [2024-04-24 19:48:36.804053] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.534 [2024-04-24 19:48:36.804060] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.804067] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e2e0) on tqpair=0x22eed00 00:17:55.534 [2024-04-24 19:48:36.804084] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.804093] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.804100] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.534 [2024-04-24 19:48:36.804114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.534 [2024-04-24 19:48:36.804135] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e2e0, cid 3, qid 0 00:17:55.534 [2024-04-24 19:48:36.804280] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.534 [2024-04-24 19:48:36.804292] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.534 [2024-04-24 19:48:36.804299] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.804305] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e2e0) on tqpair=0x22eed00 00:17:55.534 [2024-04-24 19:48:36.804322] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.804331] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.534 [2024-04-24 
19:48:36.804338] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.534 [2024-04-24 19:48:36.804348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.534 [2024-04-24 19:48:36.804369] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e2e0, cid 3, qid 0 00:17:55.534 [2024-04-24 19:48:36.804513] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.534 [2024-04-24 19:48:36.804525] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.534 [2024-04-24 19:48:36.804532] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.804538] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e2e0) on tqpair=0x22eed00 00:17:55.534 [2024-04-24 19:48:36.804555] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.804564] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.804571] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.534 [2024-04-24 19:48:36.804582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.534 [2024-04-24 19:48:36.804602] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e2e0, cid 3, qid 0 00:17:55.534 [2024-04-24 19:48:36.804762] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.534 [2024-04-24 19:48:36.804778] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.534 [2024-04-24 19:48:36.804785] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.804791] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e2e0) on tqpair=0x22eed00 00:17:55.534 [2024-04-24 19:48:36.804809] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.804818] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.804825] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.534 [2024-04-24 19:48:36.804835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.534 [2024-04-24 19:48:36.804857] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e2e0, cid 3, qid 0 00:17:55.534 [2024-04-24 19:48:36.805009] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.534 [2024-04-24 19:48:36.805021] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.534 [2024-04-24 19:48:36.805028] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.805034] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e2e0) on tqpair=0x22eed00 00:17:55.534 [2024-04-24 19:48:36.805051] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.805060] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.805067] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.534 [2024-04-24 19:48:36.805077] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.534 [2024-04-24 19:48:36.805102] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e2e0, cid 3, qid 0 00:17:55.534 [2024-04-24 19:48:36.805247] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.534 [2024-04-24 19:48:36.805262] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.534 [2024-04-24 19:48:36.805269] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.805276] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e2e0) on tqpair=0x22eed00 00:17:55.534 [2024-04-24 19:48:36.805293] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.805303] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.805309] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.534 [2024-04-24 19:48:36.805320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.534 [2024-04-24 19:48:36.805341] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e2e0, cid 3, qid 0 00:17:55.534 [2024-04-24 19:48:36.805489] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.534 [2024-04-24 19:48:36.805504] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.534 [2024-04-24 19:48:36.805511] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.805518] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e2e0) on tqpair=0x22eed00 00:17:55.534 [2024-04-24 19:48:36.805535] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.805545] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.805551] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.534 [2024-04-24 19:48:36.805562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.534 [2024-04-24 19:48:36.805583] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x234e2e0, cid 3, qid 0 00:17:55.534 [2024-04-24 19:48:36.809644] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:55.534 [2024-04-24 19:48:36.809661] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:55.534 [2024-04-24 19:48:36.809668] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.809675] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e2e0) on tqpair=0x22eed00 00:17:55.534 [2024-04-24 19:48:36.809693] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.809703] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:55.534 [2024-04-24 19:48:36.809709] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22eed00) 00:17:55.534 [2024-04-24 19:48:36.809720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.534 [2024-04-24 19:48:36.809742] nvme_tcp.c: 
00:17:55.534 [2024-04-24 19:48:36.809925] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:55.534 [2024-04-24 19:48:36.809937] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:55.534 [2024-04-24 19:48:36.809943] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:55.534 [2024-04-24 19:48:36.809950] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x234e2e0) on tqpair=0x22eed00
00:17:55.534 [2024-04-24 19:48:36.809964] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:17:55.534 0 Kelvin (-273 Celsius)
00:17:55.534 Available Spare: 0%
00:17:55.534 Available Spare Threshold: 0%
00:17:55.534 Life Percentage Used: 0%
00:17:55.534 Data Units Read: 0
00:17:55.534 Data Units Written: 0
00:17:55.534 Host Read Commands: 0
00:17:55.534 Host Write Commands: 0
00:17:55.534 Controller Busy Time: 0 minutes
00:17:55.534 Power Cycles: 0
00:17:55.535 Power On Hours: 0 hours
00:17:55.535 Unsafe Shutdowns: 0
00:17:55.535 Unrecoverable Media Errors: 0
00:17:55.535 Lifetime Error Log Entries: 0
00:17:55.535 Warning Temperature Time: 0 minutes
00:17:55.535 Critical Temperature Time: 0 minutes
00:17:55.535
00:17:55.535 Number of Queues
00:17:55.535 ================
00:17:55.535 Number of I/O Submission Queues: 127
00:17:55.535 Number of I/O Completion Queues: 127
00:17:55.535
00:17:55.535 Active Namespaces
00:17:55.535 =================
00:17:55.535 Namespace ID:1
00:17:55.535 Error Recovery Timeout: Unlimited
00:17:55.535 Command Set Identifier: NVM (00h)
00:17:55.535 Deallocate: Supported
00:17:55.535 Deallocated/Unwritten Error: Not Supported
00:17:55.535 Deallocated Read Value: Unknown
00:17:55.535 Deallocate in Write Zeroes: Not Supported
00:17:55.535 Deallocated Guard Field: 0xFFFF
00:17:55.535 Flush: Supported
00:17:55.535 Reservation: Supported
00:17:55.535 Namespace Sharing Capabilities: Multiple Controllers
00:17:55.535 Size (in LBAs): 131072 (0GiB)
00:17:55.535 Capacity (in LBAs): 131072 (0GiB)
00:17:55.535 Utilization (in LBAs): 131072 (0GiB)
00:17:55.535 NGUID: ABCDEF0123456789ABCDEF0123456789
00:17:55.535 EUI64: ABCDEF0123456789
00:17:55.535 UUID: 228d6a18-85cf-4f6b-966b-71a499bd81bc
00:17:55.535 Thin Provisioning: Not Supported
00:17:55.535 Per-NS Atomic Units: Yes
00:17:55.535 Atomic Boundary Size (Normal): 0
00:17:55.535 Atomic Boundary Size (PFail): 0
00:17:55.535 Atomic Boundary Offset: 0
00:17:55.535 Maximum Single Source Range Length: 65535
00:17:55.535 Maximum Copy Length: 65535
00:17:55.535 Maximum Source Range Count: 1
00:17:55.535 NGUID/EUI64 Never Reused: No
00:17:55.535 Namespace Write Protected: No
00:17:55.535 Number of LBA Formats: 1
00:17:55.535 Current LBA Format: LBA Format #00
00:17:55.535 LBA Format #00: Data Size: 512 Metadata Size: 0
00:17:55.535
00:17:55.535 19:48:36 -- host/identify.sh@51 -- # sync
00:17:55.535 19:48:36 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:55.535 19:48:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:55.535 19:48:36 -- common/autotest_common.sh@10 -- # set +x
00:17:55.535 19:48:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:55.535 19:48:36 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:17:55.535 19:48:36 -- host/identify.sh@56 -- # nvmftestfini
00:17:55.535 19:48:36 -- nvmf/common.sh@477 -- # nvmfcleanup
00:17:55.535 19:48:36 --
nvmf/common.sh@117 -- # sync 00:17:55.535 19:48:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:55.535 19:48:36 -- nvmf/common.sh@120 -- # set +e 00:17:55.535 19:48:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:55.535 19:48:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:55.535 rmmod nvme_tcp 00:17:55.535 rmmod nvme_fabrics 00:17:55.535 rmmod nvme_keyring 00:17:55.535 19:48:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:55.535 19:48:36 -- nvmf/common.sh@124 -- # set -e 00:17:55.535 19:48:36 -- nvmf/common.sh@125 -- # return 0 00:17:55.535 19:48:36 -- nvmf/common.sh@478 -- # '[' -n 1734391 ']' 00:17:55.535 19:48:36 -- nvmf/common.sh@479 -- # killprocess 1734391 00:17:55.535 19:48:36 -- common/autotest_common.sh@936 -- # '[' -z 1734391 ']' 00:17:55.535 19:48:36 -- common/autotest_common.sh@940 -- # kill -0 1734391 00:17:55.535 19:48:36 -- common/autotest_common.sh@941 -- # uname 00:17:55.535 19:48:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:55.535 19:48:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1734391 00:17:55.535 19:48:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:55.535 19:48:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:55.535 19:48:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1734391' 00:17:55.535 killing process with pid 1734391 00:17:55.535 19:48:36 -- common/autotest_common.sh@955 -- # kill 1734391 00:17:55.535 [2024-04-24 19:48:36.910788] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:55.535 19:48:36 -- common/autotest_common.sh@960 -- # wait 1734391 00:17:55.794 19:48:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:55.794 19:48:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:55.794 19:48:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:55.794 19:48:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.794 19:48:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.794 19:48:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.794 19:48:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.794 19:48:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.328 19:48:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:58.328 00:17:58.328 real 0m5.524s 00:17:58.328 user 0m4.746s 00:17:58.328 sys 0m1.898s 00:17:58.328 19:48:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:58.328 19:48:39 -- common/autotest_common.sh@10 -- # set +x 00:17:58.328 ************************************ 00:17:58.328 END TEST nvmf_identify 00:17:58.328 ************************************ 00:17:58.328 19:48:39 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:58.328 19:48:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:58.328 19:48:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:58.328 19:48:39 -- common/autotest_common.sh@10 -- # set +x 00:17:58.328 ************************************ 00:17:58.328 START TEST nvmf_perf 00:17:58.328 ************************************ 00:17:58.328 19:48:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:58.328 * Looking for test storage... 
00:17:58.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:58.328 19:48:39 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.328 19:48:39 -- nvmf/common.sh@7 -- # uname -s 00:17:58.328 19:48:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.328 19:48:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.328 19:48:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.328 19:48:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.328 19:48:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.328 19:48:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.328 19:48:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.328 19:48:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.328 19:48:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.328 19:48:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.328 19:48:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.328 19:48:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:58.328 19:48:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.328 19:48:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.328 19:48:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.328 19:48:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.328 19:48:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.328 19:48:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.328 19:48:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.328 19:48:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.329 19:48:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.329 19:48:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.329 19:48:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.329 19:48:39 -- paths/export.sh@5 -- # export PATH 00:17:58.329 19:48:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.329 19:48:39 -- nvmf/common.sh@47 -- # : 0 00:17:58.329 19:48:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.329 19:48:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.329 19:48:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.329 19:48:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.329 19:48:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.329 19:48:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.329 19:48:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.329 19:48:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.329 19:48:39 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:58.329 19:48:39 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:58.329 19:48:39 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.329 19:48:39 -- host/perf.sh@17 -- # nvmftestinit 00:17:58.329 19:48:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:58.329 19:48:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.329 19:48:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:58.329 19:48:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:58.329 19:48:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:58.329 19:48:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.329 19:48:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.329 19:48:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.329 19:48:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:58.329 19:48:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:58.329 19:48:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:58.329 19:48:39 -- common/autotest_common.sh@10 -- # set +x 00:18:00.231 19:48:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:00.231 19:48:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:00.231 19:48:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:00.231 19:48:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:00.231 19:48:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:00.231 19:48:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:00.231 19:48:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:00.231 19:48:41 -- nvmf/common.sh@295 -- # net_devs=() 
00:18:00.231 19:48:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:00.231 19:48:41 -- nvmf/common.sh@296 -- # e810=() 00:18:00.231 19:48:41 -- nvmf/common.sh@296 -- # local -ga e810 00:18:00.231 19:48:41 -- nvmf/common.sh@297 -- # x722=() 00:18:00.231 19:48:41 -- nvmf/common.sh@297 -- # local -ga x722 00:18:00.231 19:48:41 -- nvmf/common.sh@298 -- # mlx=() 00:18:00.231 19:48:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:00.231 19:48:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.231 19:48:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.231 19:48:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.231 19:48:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.231 19:48:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.231 19:48:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.231 19:48:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.231 19:48:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.231 19:48:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.231 19:48:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.231 19:48:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.231 19:48:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:00.231 19:48:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:00.231 19:48:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:00.231 19:48:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.231 19:48:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:00.231 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:00.231 19:48:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.231 19:48:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:00.231 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:00.231 19:48:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:00.231 19:48:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:00.231 19:48:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.231 19:48:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.231 19:48:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:00.232 19:48:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
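(Aside: the pci_devs walk above keys on the Intel device ID 0x159b, an E810 function, and resolves each function to the netdev the ice driver bound to it via sysfs, which is what the "Found net devices under ..." echoes just below report. A standalone sketch of the same lookup, assuming a stock lspci and the 0000:0a:00.* addresses this rig happens to use:

# List E810 functions by vendor:device ID, as matched above.
lspci -d 8086:159b
# Map each function to its kernel netdev; this mirrors
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) from nvmf/common.sh.
for pci in /sys/bus/pci/devices/0000:0a:00.{0,1}; do
    echo "$pci -> $(ls "$pci/net")"
done
)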
00:18:00.232 19:48:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:00.232 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:00.232 19:48:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.232 19:48:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.232 19:48:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.232 19:48:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:00.232 19:48:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.232 19:48:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:00.232 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:00.232 19:48:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.232 19:48:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:00.232 19:48:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:00.232 19:48:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:00.232 19:48:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:00.232 19:48:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:00.232 19:48:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.232 19:48:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.232 19:48:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.232 19:48:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:00.232 19:48:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.232 19:48:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.232 19:48:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:00.232 19:48:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.232 19:48:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.232 19:48:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:00.232 19:48:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:00.232 19:48:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.232 19:48:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.232 19:48:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.232 19:48:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.232 19:48:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:00.232 19:48:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.232 19:48:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.232 19:48:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.232 19:48:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:00.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:18:00.232 00:18:00.232 --- 10.0.0.2 ping statistics --- 00:18:00.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.232 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:18:00.232 19:48:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:00.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:18:00.232 00:18:00.232 --- 10.0.0.1 ping statistics --- 00:18:00.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.232 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:18:00.232 19:48:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.232 19:48:41 -- nvmf/common.sh@411 -- # return 0 00:18:00.232 19:48:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:00.232 19:48:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.232 19:48:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:00.232 19:48:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:00.232 19:48:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.232 19:48:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:00.232 19:48:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:00.232 19:48:41 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:00.232 19:48:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:00.232 19:48:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:00.232 19:48:41 -- common/autotest_common.sh@10 -- # set +x 00:18:00.232 19:48:41 -- nvmf/common.sh@470 -- # nvmfpid=1736481 00:18:00.232 19:48:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:00.232 19:48:41 -- nvmf/common.sh@471 -- # waitforlisten 1736481 00:18:00.232 19:48:41 -- common/autotest_common.sh@817 -- # '[' -z 1736481 ']' 00:18:00.232 19:48:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.232 19:48:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:00.232 19:48:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.232 19:48:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:00.232 19:48:41 -- common/autotest_common.sh@10 -- # set +x 00:18:00.232 [2024-04-24 19:48:41.563979] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:18:00.232 [2024-04-24 19:48:41.564045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.232 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.232 [2024-04-24 19:48:41.629318] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:00.490 [2024-04-24 19:48:41.746641] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.490 [2024-04-24 19:48:41.746696] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.491 [2024-04-24 19:48:41.746713] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.491 [2024-04-24 19:48:41.746727] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.491 [2024-04-24 19:48:41.746739] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
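(Aside: the nvmf_tcp_init plumbing and ping checks above are simple enough to replay by hand. One E810 port, cvl_0_0, is moved into a private network namespace to host the target; its sibling cvl_0_1 stays in the root namespace as the initiator; the target app is then launched inside that namespace. A sketch of the same steps with the names and 10.0.0.0/24 addressing used in this run, assuming the two ports can reach each other on the wire, as they evidently do here, and that commands are issued from the SPDK source root:

# Target side: move one port into its own network namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Initiator side: the sibling port stays in the root namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
# Allow NVMe/TCP traffic to the default port and verify reachability.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
# Launch the target inside the namespace, core mask and log flags as above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Isolating the target in a namespace is what lets one machine act as both host and target over a physically looped pair of ports.)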
00:18:00.491 [2024-04-24 19:48:41.746807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.491 [2024-04-24 19:48:41.746858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.491 [2024-04-24 19:48:41.746973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:00.491 [2024-04-24 19:48:41.746975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.056 19:48:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:01.056 19:48:42 -- common/autotest_common.sh@850 -- # return 0 00:18:01.056 19:48:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:01.056 19:48:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:01.056 19:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:01.056 19:48:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.056 19:48:42 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:18:01.056 19:48:42 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:18:04.328 19:48:45 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:18:04.328 19:48:45 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:04.584 19:48:45 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:18:04.584 19:48:45 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:04.841 19:48:46 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:04.841 19:48:46 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:18:04.841 19:48:46 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:04.841 19:48:46 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:04.841 19:48:46 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:05.098 [2024-04-24 19:48:46.372267] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.098 19:48:46 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:05.355 19:48:46 -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:05.355 19:48:46 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:05.612 19:48:46 -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:05.612 19:48:46 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:05.869 19:48:47 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.869 [2024-04-24 19:48:47.355813] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.869 19:48:47 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:06.127 19:48:47 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:18:06.127 19:48:47 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:18:06.127 19:48:47 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
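(Before the runs below, perf.sh provisioned the target over JSON-RPC, as traced above at host/perf.sh@42-49: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a Malloc bdev and the local NVMe bdev as namespaces 1 and 2, and a listener on 10.0.0.2:4420. A condensed, hand-runnable sketch of the same flow, assuming the SPDK source root as working directory and the default /var/tmp/spdk.sock RPC socket; the $rpc shorthand is ours, not the script's:

rpc=./scripts/rpc.py
# Provision the target, mirroring the RPCs traced above.
$rpc nvmf_create_transport -t tcp -o
$rpc bdev_malloc_create 64 512                      # 64 MiB bdev, 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Baseline against the raw PCIe SSD, then the same workload over NVMe/TCP:
# -q queue depth, -o I/O size in bytes, -w workload, -M read percentage, -t seconds.
./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:PCIe traddr:0000:88:00.0'
./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

Running the identical workload against the local device first gives the PCIe latency floor that the fabrics numbers below are measured against.)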
00:18:06.127 19:48:47 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
00:18:07.498 Initializing NVMe Controllers
00:18:07.498 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54]
00:18:07.498 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0
00:18:07.499 Initialization complete. Launching workers.
00:18:07.499 ========================================================
00:18:07.499 Latency(us)
00:18:07.499 Device Information : IOPS MiB/s Average min max
00:18:07.499 PCIE (0000:88:00.0) NSID 1 from core 0: 86634.58 338.42 368.87 21.83 6276.36
00:18:07.499 ========================================================
00:18:07.499 Total : 86634.58 338.42 368.87 21.83 6276.36
00:18:07.499
00:18:07.499 19:48:48 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:18:07.499 EAL: No free 2048 kB hugepages reported on node 1
00:18:08.879 Initializing NVMe Controllers
00:18:08.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:18:08.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:08.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:18:08.879 Initialization complete. Launching workers.
00:18:08.879 ========================================================
00:18:08.879 Latency(us)
00:18:08.879 Device Information : IOPS MiB/s Average min max
00:18:08.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 128.55 0.50 7995.42 254.32 46085.24
00:18:08.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.81 0.21 19248.17 7946.07 51862.25
00:18:08.879 ========================================================
00:18:08.879 Total : 182.36 0.71 11315.91 254.32 51862.25
00:18:08.879
00:18:08.879 19:48:50 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:18:08.879 EAL: No free 2048 kB hugepages reported on node 1
00:18:10.259 Initializing NVMe Controllers
00:18:10.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:18:10.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:10.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:18:10.259 Initialization complete. Launching workers.
00:18:10.259 ========================================================
00:18:10.259 Latency(us)
00:18:10.259 Device Information : IOPS MiB/s Average min max
00:18:10.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8173.54 31.93 3914.94 612.82 11104.49
00:18:10.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3825.79 14.94 8388.46 5267.53 16033.43
00:18:10.259 ========================================================
00:18:10.259 Total : 11999.33 46.87 5341.25 612.82 16033.43
00:18:10.259
00:18:10.259 19:48:51 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:18:10.259 19:48:51 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:18:10.259 19:48:51 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:18:10.259 EAL: No free 2048 kB hugepages reported on node 1
00:18:12.790 Initializing NVMe Controllers
00:18:12.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:18:12.790 Controller IO queue size 128, less than required.
00:18:12.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:12.790 Controller IO queue size 128, less than required.
00:18:12.790 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:12.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:12.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:18:12.790 Initialization complete. Launching workers.
00:18:12.790 ========================================================
00:18:12.790 Latency(us)
00:18:12.790 Device Information : IOPS MiB/s Average min max
00:18:12.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 780.07 195.02 172485.44 94637.24 232350.62
00:18:12.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 578.81 144.70 228742.61 85336.71 350070.24
00:18:12.790 ========================================================
00:18:12.790 Total : 1358.88 339.72 196447.98 85336.71 350070.24
00:18:12.790
00:18:12.790 19:48:54 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:18:12.790 EAL: No free 2048 kB hugepages reported on node 1
00:18:13.048 No valid NVMe controllers or AIO or URING devices found
00:18:13.048 Initializing NVMe Controllers
00:18:13.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:18:13.048 Controller IO queue size 128, less than required.
00:18:13.048 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:13.048 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:18:13.048 Controller IO queue size 128, less than required.
00:18:13.048 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:13.048 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:18:13.048 WARNING: Some requested NVMe devices were skipped
00:18:13.048 19:48:54 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:18:13.048 EAL: No free 2048 kB hugepages reported on node 1
00:18:15.581 Initializing NVMe Controllers
00:18:15.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:18:15.581 Controller IO queue size 128, less than required.
00:18:15.581 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:15.581 Controller IO queue size 128, less than required.
00:18:15.581 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:15.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:15.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:18:15.581 Initialization complete. Launching workers.
00:18:15.581
00:18:15.581 ====================
00:18:15.581 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:18:15.581 TCP transport:
00:18:15.581 polls: 29531
00:18:15.581 idle_polls: 11623
00:18:15.581 sock_completions: 17908
00:18:15.581 nvme_completions: 3725
00:18:15.581 submitted_requests: 5634
00:18:15.581 queued_requests: 1
00:18:15.581
00:18:15.581 ====================
00:18:15.581 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:18:15.581 TCP transport:
00:18:15.581 polls: 29535
00:18:15.581 idle_polls: 12504
00:18:15.581 sock_completions: 17031
00:18:15.581 nvme_completions: 3523
00:18:15.581 submitted_requests: 5294
00:18:15.581 queued_requests: 1
00:18:15.581 ========================================================
00:18:15.581 Latency(us)
00:18:15.581 Device Information : IOPS MiB/s Average min max
00:18:15.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 930.99 232.75 144427.63 81215.75 225340.59
00:18:15.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 880.49 220.12 149357.61 65397.04 232148.97
00:18:15.581 ========================================================
00:18:15.581 Total : 1811.49 452.87 146823.90 65397.04 232148.97
00:18:15.581
00:18:15.581 19:48:56 -- host/perf.sh@66 -- # sync
00:18:15.581 19:48:56 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:15.839 19:48:57 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:18:15.839 19:48:57 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:18:15.839 19:48:57 -- host/perf.sh@114 -- # nvmftestfini
00:18:15.839 19:48:57 -- nvmf/common.sh@477 -- # nvmfcleanup
00:18:15.839 19:48:57 -- nvmf/common.sh@117 -- # sync
00:18:15.839 19:48:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:15.839 19:48:57 -- nvmf/common.sh@120 -- # set +e
00:18:15.839 19:48:57 -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:15.839 19:48:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:15.839 rmmod nvme_tcp
00:18:15.839 rmmod nvme_fabrics
00:18:15.839 rmmod nvme_keyring
00:18:15.839 19:48:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:15.839 19:48:57 -- nvmf/common.sh@124 -- # set -e
00:18:15.839 19:48:57 -- nvmf/common.sh@125 -- # return 0
00:18:15.839 19:48:57 --
nvmf/common.sh@478 -- # '[' -n 1736481 ']' 00:18:15.839 19:48:57 -- nvmf/common.sh@479 -- # killprocess 1736481 00:18:15.839 19:48:57 -- common/autotest_common.sh@936 -- # '[' -z 1736481 ']' 00:18:15.839 19:48:57 -- common/autotest_common.sh@940 -- # kill -0 1736481 00:18:15.839 19:48:57 -- common/autotest_common.sh@941 -- # uname 00:18:15.839 19:48:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:15.839 19:48:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1736481 00:18:16.097 19:48:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:16.097 19:48:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:16.097 19:48:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1736481' 00:18:16.097 killing process with pid 1736481 00:18:16.097 19:48:57 -- common/autotest_common.sh@955 -- # kill 1736481 00:18:16.097 19:48:57 -- common/autotest_common.sh@960 -- # wait 1736481 00:18:17.997 19:48:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:17.997 19:48:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:17.997 19:48:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:17.997 19:48:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.997 19:48:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.997 19:48:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.997 19:48:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.997 19:48:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.900 19:49:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:19.900 00:18:19.900 real 0m21.681s 00:18:19.900 user 1m8.545s 00:18:19.900 sys 0m4.679s 00:18:19.900 19:49:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:19.900 19:49:01 -- common/autotest_common.sh@10 -- # set +x 00:18:19.900 ************************************ 00:18:19.900 END TEST nvmf_perf 00:18:19.900 ************************************ 00:18:19.900 19:49:01 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:19.900 19:49:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:19.900 19:49:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:19.900 19:49:01 -- common/autotest_common.sh@10 -- # set +x 00:18:19.900 ************************************ 00:18:19.900 START TEST nvmf_fio_host 00:18:19.900 ************************************ 00:18:19.900 19:49:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:19.900 * Looking for test storage... 
00:18:19.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:19.900 19:49:01 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.900 19:49:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.900 19:49:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.900 19:49:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.900 19:49:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.900 19:49:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.901 19:49:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.901 19:49:01 -- paths/export.sh@5 -- # export PATH 00:18:19.901 19:49:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.901 19:49:01 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.901 19:49:01 -- nvmf/common.sh@7 -- # uname -s 00:18:19.901 19:49:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.901 19:49:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.901 19:49:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.901 19:49:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.901 19:49:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.901 19:49:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.901 19:49:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.901 19:49:01 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.901 19:49:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.901 19:49:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.901 19:49:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.901 19:49:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:19.901 19:49:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.901 19:49:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.901 19:49:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:19.901 19:49:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.901 19:49:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.901 19:49:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.901 19:49:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.901 19:49:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.901 19:49:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.901 19:49:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.901 19:49:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.901 19:49:01 -- paths/export.sh@5 -- # export PATH 00:18:19.901 19:49:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.901 19:49:01 -- nvmf/common.sh@47 -- # : 0 00:18:19.901 19:49:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:19.901 19:49:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:19.901 19:49:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.901 19:49:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.901 19:49:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.901 19:49:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:19.901 19:49:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:19.901 19:49:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:19.901 19:49:01 -- host/fio.sh@12 -- # nvmftestinit 00:18:19.901 19:49:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:19.901 19:49:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.901 19:49:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:19.901 19:49:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:19.901 19:49:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:19.901 19:49:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.901 19:49:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.901 19:49:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.901 19:49:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:19.901 19:49:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:19.901 19:49:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:19.901 19:49:01 -- common/autotest_common.sh@10 -- # set +x 00:18:21.804 19:49:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:21.804 19:49:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:21.804 19:49:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:21.804 19:49:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:21.804 19:49:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:21.804 19:49:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:21.804 19:49:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:21.804 19:49:03 -- nvmf/common.sh@295 -- # net_devs=() 00:18:21.804 19:49:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:21.804 19:49:03 -- nvmf/common.sh@296 -- # e810=() 00:18:21.804 19:49:03 -- nvmf/common.sh@296 -- # local -ga e810 00:18:21.804 19:49:03 -- nvmf/common.sh@297 -- # x722=() 00:18:21.804 19:49:03 -- nvmf/common.sh@297 -- # local -ga x722 00:18:21.804 19:49:03 -- nvmf/common.sh@298 -- # mlx=() 00:18:21.804 19:49:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:21.804 19:49:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.804 19:49:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.804 19:49:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.804 19:49:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.804 19:49:03 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.804 19:49:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.804 19:49:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.804 19:49:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.804 19:49:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.804 19:49:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.804 19:49:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.804 19:49:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:21.804 19:49:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:21.804 19:49:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:21.804 19:49:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.804 19:49:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:21.804 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:21.804 19:49:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.804 19:49:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:21.804 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:21.804 19:49:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:21.804 19:49:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:21.804 19:49:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.804 19:49:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.804 19:49:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:21.804 19:49:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.804 19:49:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:21.804 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:21.804 19:49:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.804 19:49:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.804 19:49:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.804 19:49:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:21.804 19:49:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.805 19:49:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:21.805 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:21.805 19:49:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.805 19:49:03 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:21.805 19:49:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:21.805 19:49:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:21.805 19:49:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:21.805 19:49:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:21.805 19:49:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.805 19:49:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.805 19:49:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:21.805 19:49:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:21.805 19:49:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:21.805 19:49:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:21.805 19:49:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:21.805 19:49:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:21.805 19:49:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.805 19:49:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:21.805 19:49:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:21.805 19:49:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:21.805 19:49:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:21.805 19:49:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:21.805 19:49:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:22.064 19:49:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:22.064 19:49:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:22.064 19:49:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:22.064 19:49:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:22.065 19:49:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:22.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:22.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:18:22.065 00:18:22.065 --- 10.0.0.2 ping statistics --- 00:18:22.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.065 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:18:22.065 19:49:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:22.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:22.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:18:22.065 00:18:22.065 --- 10.0.0.1 ping statistics --- 00:18:22.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.065 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:18:22.065 19:49:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.065 19:49:03 -- nvmf/common.sh@411 -- # return 0 00:18:22.065 19:49:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:22.065 19:49:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.065 19:49:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:22.065 19:49:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:22.065 19:49:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.065 19:49:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:22.065 19:49:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:22.065 19:49:03 -- host/fio.sh@14 -- # [[ y != y ]] 00:18:22.065 19:49:03 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:18:22.065 19:49:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:22.065 19:49:03 -- common/autotest_common.sh@10 -- # set +x 00:18:22.065 19:49:03 -- host/fio.sh@22 -- # nvmfpid=1740461 00:18:22.065 19:49:03 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:22.065 19:49:03 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:22.065 19:49:03 -- host/fio.sh@26 -- # waitforlisten 1740461 00:18:22.065 19:49:03 -- common/autotest_common.sh@817 -- # '[' -z 1740461 ']' 00:18:22.065 19:49:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.065 19:49:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:22.065 19:49:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.065 19:49:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:22.065 19:49:03 -- common/autotest_common.sh@10 -- # set +x 00:18:22.065 [2024-04-24 19:49:03.463140] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:18:22.065 [2024-04-24 19:49:03.463222] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.065 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.065 [2024-04-24 19:49:03.533214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:22.324 [2024-04-24 19:49:03.647576] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.324 [2024-04-24 19:49:03.647661] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.324 [2024-04-24 19:49:03.647686] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.324 [2024-04-24 19:49:03.647698] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.324 [2024-04-24 19:49:03.647708] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
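(Aside on the fio host test below: fio never opens a block device here. host/fio.sh LD_PRELOADs the SPDK NVMe fio plugin and smuggles the connection string in through --filename, so the 'trtype=tcp ... ns=1' tuple selects the target namespace to exercise. A sketch of the equivalent manual invocation, assuming fio is built at /usr/src/fio as in this rig; nvmf_job.fio is our hypothetical stand-in for example_config.fio, with the rw/bs/iodepth values that fio echoes back below:

# Minimal job file; parameters mirror the "test: (g=0): rw=randrw, bs=4096,
# ioengine=spdk, iodepth=128" line fio prints below. (nvmf_job.fio is ours.)
cat > nvmf_job.fio <<'EOF'
[global]
ioengine=spdk
thread=1
direct=1
rw=randrw
bs=4096
iodepth=128
time_based=1
runtime=2

[test]
numjobs=1
EOF
# The preloaded plugin resolves "filename" into an NVMe-oF transport ID.
LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio nvmf_job.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
)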
00:18:22.324 [2024-04-24 19:49:03.647770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.324 [2024-04-24 19:49:03.647829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.324 [2024-04-24 19:49:03.647906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:22.324 [2024-04-24 19:49:03.647909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.324 19:49:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:22.324 19:49:03 -- common/autotest_common.sh@850 -- # return 0 00:18:22.324 19:49:03 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:22.324 19:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.324 19:49:03 -- common/autotest_common.sh@10 -- # set +x 00:18:22.324 [2024-04-24 19:49:03.778373] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.324 19:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.324 19:49:03 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:18:22.324 19:49:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:22.324 19:49:03 -- common/autotest_common.sh@10 -- # set +x 00:18:22.324 19:49:03 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:22.324 19:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.324 19:49:03 -- common/autotest_common.sh@10 -- # set +x 00:18:22.324 Malloc1 00:18:22.324 19:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.324 19:49:03 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:22.324 19:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.324 19:49:03 -- common/autotest_common.sh@10 -- # set +x 00:18:22.582 19:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.582 19:49:03 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:22.582 19:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.582 19:49:03 -- common/autotest_common.sh@10 -- # set +x 00:18:22.582 19:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.582 19:49:03 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.582 19:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.582 19:49:03 -- common/autotest_common.sh@10 -- # set +x 00:18:22.582 [2024-04-24 19:49:03.855734] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.582 19:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.582 19:49:03 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:22.582 19:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:22.582 19:49:03 -- common/autotest_common.sh@10 -- # set +x 00:18:22.582 19:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:22.582 19:49:03 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:18:22.582 19:49:03 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:22.582 19:49:03 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:22.582 19:49:03 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:18:22.582 19:49:03 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:22.582 19:49:03 -- common/autotest_common.sh@1325 -- # local sanitizers 00:18:22.582 19:49:03 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:22.582 19:49:03 -- common/autotest_common.sh@1327 -- # shift 00:18:22.582 19:49:03 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:18:22.582 19:49:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:22.582 19:49:03 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:22.582 19:49:03 -- common/autotest_common.sh@1331 -- # grep libasan 00:18:22.582 19:49:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:22.582 19:49:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:22.582 19:49:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:22.582 19:49:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:22.582 19:49:03 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:22.582 19:49:03 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:18:22.582 19:49:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:22.582 19:49:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:22.582 19:49:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:22.582 19:49:03 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:22.582 19:49:03 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:22.582 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:22.582 fio-3.35 00:18:22.582 Starting 1 thread 00:18:22.840 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.368 00:18:25.368 test: (groupid=0, jobs=1): err= 0: pid=1740678: Wed Apr 24 19:49:06 2024 00:18:25.368 read: IOPS=8967, BW=35.0MiB/s (36.7MB/s)(70.3MiB/2007msec) 00:18:25.368 slat (nsec): min=1984, max=112366, avg=2500.36, stdev=1606.10 00:18:25.368 clat (usec): min=2762, max=13296, avg=7865.07, stdev=573.32 00:18:25.368 lat (usec): min=2779, max=13299, avg=7867.57, stdev=573.23 00:18:25.368 clat percentiles (usec): 00:18:25.368 | 1.00th=[ 6587], 5.00th=[ 6980], 10.00th=[ 7177], 20.00th=[ 7439], 00:18:25.368 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:18:25.368 | 70.00th=[ 8160], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:18:25.368 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[12125], 99.95th=[12780], 00:18:25.368 | 99.99th=[13304] 00:18:25.368 bw ( KiB/s): min=34672, max=36648, per=99.98%, avg=35860.00, stdev=839.11, samples=4 00:18:25.368 iops : min= 8668, max= 9162, avg=8965.00, stdev=209.78, samples=4 00:18:25.368 write: IOPS=8989, BW=35.1MiB/s (36.8MB/s)(70.5MiB/2007msec); 0 zone resets 00:18:25.368 slat (nsec): min=2091, max=97932, avg=2624.10, stdev=1448.82 00:18:25.368 clat 
(usec): min=1605, max=12380, avg=6291.22, stdev=519.12 00:18:25.368 lat (usec): min=1611, max=12383, avg=6293.84, stdev=519.09 00:18:25.368 clat percentiles (usec): 00:18:25.368 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5932], 00:18:25.368 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6259], 60.00th=[ 6390], 00:18:25.368 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6915], 95.00th=[ 7046], 00:18:25.368 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[10683], 99.95th=[11994], 00:18:25.368 | 99.99th=[12387] 00:18:25.368 bw ( KiB/s): min=35496, max=36288, per=100.00%, avg=35962.00, stdev=349.28, samples=4 00:18:25.368 iops : min= 8874, max= 9072, avg=8990.50, stdev=87.32, samples=4 00:18:25.368 lat (msec) : 2=0.01%, 4=0.11%, 10=99.73%, 20=0.15% 00:18:25.368 cpu : usr=51.10%, sys=39.88%, ctx=69, majf=0, minf=5 00:18:25.368 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:25.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:25.368 issued rwts: total=17997,18041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.368 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:25.368 00:18:25.368 Run status group 0 (all jobs): 00:18:25.368 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.3MiB (73.7MB), run=2007-2007msec 00:18:25.368 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.5MiB (73.9MB), run=2007-2007msec 00:18:25.368 19:49:06 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:25.368 19:49:06 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:25.368 19:49:06 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:18:25.368 19:49:06 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:25.368 19:49:06 -- common/autotest_common.sh@1325 -- # local sanitizers 00:18:25.368 19:49:06 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:25.368 19:49:06 -- common/autotest_common.sh@1327 -- # shift 00:18:25.368 19:49:06 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:18:25.368 19:49:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.368 19:49:06 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:25.368 19:49:06 -- common/autotest_common.sh@1331 -- # grep libasan 00:18:25.368 19:49:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:25.368 19:49:06 -- common/autotest_common.sh@1331 -- # asan_lib= 00:18:25.368 19:49:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:25.368 19:49:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.368 19:49:06 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:25.368 19:49:06 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:18:25.368 19:49:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:25.368 19:49:06 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:18:25.368 19:49:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:18:25.368 19:49:06 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:25.368 19:49:06 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:25.368 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:25.368 fio-3.35 00:18:25.368 Starting 1 thread 00:18:25.368 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.900 00:18:27.900 test: (groupid=0, jobs=1): err= 0: pid=1741021: Wed Apr 24 19:49:09 2024 00:18:27.900 read: IOPS=8082, BW=126MiB/s (132MB/s)(254MiB/2009msec) 00:18:27.900 slat (nsec): min=2814, max=98518, avg=3546.83, stdev=1502.13 00:18:27.900 clat (usec): min=2656, max=19337, avg=9510.40, stdev=2346.25 00:18:27.900 lat (usec): min=2659, max=19341, avg=9513.95, stdev=2346.32 00:18:27.900 clat percentiles (usec): 00:18:27.900 | 1.00th=[ 4621], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 7439], 00:18:27.900 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10028], 00:18:27.900 | 70.00th=[10683], 80.00th=[11469], 90.00th=[12649], 95.00th=[13435], 00:18:27.900 | 99.00th=[15401], 99.50th=[16057], 99.90th=[16581], 99.95th=[16909], 00:18:27.900 | 99.99th=[18220] 00:18:27.900 bw ( KiB/s): min=58176, max=74080, per=51.18%, avg=66184.00, stdev=8602.96, samples=4 00:18:27.900 iops : min= 3636, max= 4630, avg=4136.50, stdev=537.68, samples=4 00:18:27.900 write: IOPS=4679, BW=73.1MiB/s (76.7MB/s)(135MiB/1848msec); 0 zone resets 00:18:27.900 slat (usec): min=30, max=222, avg=33.37, stdev= 5.13 00:18:27.900 clat (usec): min=4202, max=19733, avg=11015.32, stdev=1898.43 00:18:27.900 lat (usec): min=4234, max=19765, avg=11048.69, stdev=1898.82 00:18:27.900 clat percentiles (usec): 00:18:27.900 | 1.00th=[ 7570], 5.00th=[ 8225], 10.00th=[ 8717], 20.00th=[ 9372], 00:18:27.900 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10814], 60.00th=[11207], 00:18:27.900 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13698], 95.00th=[14484], 00:18:27.900 | 99.00th=[15926], 99.50th=[16188], 99.90th=[17171], 99.95th=[19268], 00:18:27.900 | 99.99th=[19792] 00:18:27.900 bw ( KiB/s): min=60288, max=77280, per=91.83%, avg=68760.00, stdev=9042.49, samples=4 00:18:27.900 iops : min= 3768, max= 4830, avg=4297.50, stdev=565.16, samples=4 00:18:27.900 lat (msec) : 4=0.21%, 10=49.68%, 20=50.11% 00:18:27.900 cpu : usr=73.66%, sys=22.26%, ctx=23, majf=0, minf=1 00:18:27.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:18:27.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:27.900 issued rwts: total=16238,8648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:27.900 00:18:27.900 Run status group 0 (all jobs): 00:18:27.900 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=254MiB (266MB), run=2009-2009msec 00:18:27.900 WRITE: bw=73.1MiB/s (76.7MB/s), 73.1MiB/s-73.1MiB/s (76.7MB/s-76.7MB/s), io=135MiB (142MB), run=1848-1848msec 00:18:27.900 19:49:09 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:27.900 19:49:09 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.900 19:49:09 -- common/autotest_common.sh@10 -- # set +x 00:18:27.900 19:49:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.900 19:49:09 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:18:27.900 19:49:09 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:18:27.900 19:49:09 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:18:27.900 19:49:09 -- host/fio.sh@84 -- # nvmftestfini 00:18:27.900 19:49:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:27.900 19:49:09 -- nvmf/common.sh@117 -- # sync 00:18:27.900 19:49:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:27.900 19:49:09 -- nvmf/common.sh@120 -- # set +e 00:18:27.900 19:49:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:27.900 19:49:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:27.900 rmmod nvme_tcp 00:18:27.900 rmmod nvme_fabrics 00:18:27.900 rmmod nvme_keyring 00:18:27.900 19:49:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:27.900 19:49:09 -- nvmf/common.sh@124 -- # set -e 00:18:27.900 19:49:09 -- nvmf/common.sh@125 -- # return 0 00:18:27.900 19:49:09 -- nvmf/common.sh@478 -- # '[' -n 1740461 ']' 00:18:27.900 19:49:09 -- nvmf/common.sh@479 -- # killprocess 1740461 00:18:27.900 19:49:09 -- common/autotest_common.sh@936 -- # '[' -z 1740461 ']' 00:18:27.900 19:49:09 -- common/autotest_common.sh@940 -- # kill -0 1740461 00:18:27.900 19:49:09 -- common/autotest_common.sh@941 -- # uname 00:18:27.900 19:49:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:27.900 19:49:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1740461 00:18:27.900 19:49:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:27.900 19:49:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:27.900 19:49:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1740461' 00:18:27.900 killing process with pid 1740461 00:18:27.900 19:49:09 -- common/autotest_common.sh@955 -- # kill 1740461 00:18:27.900 19:49:09 -- common/autotest_common.sh@960 -- # wait 1740461 00:18:28.160 19:49:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:28.160 19:49:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:28.160 19:49:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:28.160 19:49:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.160 19:49:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:28.160 19:49:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.160 19:49:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.160 19:49:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.064 19:49:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:30.064 00:18:30.064 real 0m10.365s 00:18:30.064 user 0m27.203s 00:18:30.064 sys 0m3.831s 00:18:30.064 19:49:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:30.064 19:49:11 -- common/autotest_common.sh@10 -- # set +x 00:18:30.064 ************************************ 00:18:30.064 END TEST nvmf_fio_host 00:18:30.064 ************************************ 00:18:30.064 19:49:11 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:30.064 19:49:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:30.064 19:49:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:30.064 19:49:11 -- common/autotest_common.sh@10 -- # 
set +x 00:18:30.323 ************************************ 00:18:30.323 START TEST nvmf_failover 00:18:30.323 ************************************ 00:18:30.323 19:49:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:30.323 * Looking for test storage... 00:18:30.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:30.323 19:49:11 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.323 19:49:11 -- nvmf/common.sh@7 -- # uname -s 00:18:30.323 19:49:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.323 19:49:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.323 19:49:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.323 19:49:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.323 19:49:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.323 19:49:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.323 19:49:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.323 19:49:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.323 19:49:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.323 19:49:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.323 19:49:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.323 19:49:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.323 19:49:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.323 19:49:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.323 19:49:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.323 19:49:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:30.323 19:49:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:30.323 19:49:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.323 19:49:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.323 19:49:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.323 19:49:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.323 19:49:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.323 19:49:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.323 19:49:11 -- paths/export.sh@5 -- # export PATH 00:18:30.324 19:49:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.324 19:49:11 -- nvmf/common.sh@47 -- # : 0 00:18:30.324 19:49:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:30.324 19:49:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:30.324 19:49:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:30.324 19:49:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.324 19:49:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.324 19:49:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:30.324 19:49:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:30.324 19:49:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:30.324 19:49:11 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:30.324 19:49:11 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:30.324 19:49:11 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:30.324 19:49:11 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.324 19:49:11 -- host/failover.sh@18 -- # nvmftestinit 00:18:30.324 19:49:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:30.324 19:49:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.324 19:49:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:30.324 19:49:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:30.324 19:49:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:30.324 19:49:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.324 19:49:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.324 19:49:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.324 19:49:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:30.324 19:49:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:30.324 19:49:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:30.324 19:49:11 -- common/autotest_common.sh@10 -- # set +x 00:18:32.857 19:49:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:32.857 19:49:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:32.857 19:49:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:32.857 19:49:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:32.857 19:49:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:32.857 19:49:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:32.857 19:49:13 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:18:32.857 19:49:13 -- nvmf/common.sh@295 -- # net_devs=() 00:18:32.857 19:49:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:32.857 19:49:13 -- nvmf/common.sh@296 -- # e810=() 00:18:32.857 19:49:13 -- nvmf/common.sh@296 -- # local -ga e810 00:18:32.857 19:49:13 -- nvmf/common.sh@297 -- # x722=() 00:18:32.857 19:49:13 -- nvmf/common.sh@297 -- # local -ga x722 00:18:32.857 19:49:13 -- nvmf/common.sh@298 -- # mlx=() 00:18:32.857 19:49:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:32.857 19:49:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.857 19:49:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.857 19:49:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.857 19:49:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.857 19:49:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.857 19:49:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.857 19:49:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.857 19:49:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.857 19:49:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.857 19:49:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.857 19:49:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.857 19:49:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:32.857 19:49:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:32.857 19:49:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:32.857 19:49:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:32.857 19:49:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:32.857 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:32.857 19:49:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:32.857 19:49:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:32.857 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:32.857 19:49:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:32.857 19:49:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:32.857 19:49:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.857 19:49:13 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:18:32.857 19:49:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.857 19:49:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:32.857 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:32.857 19:49:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.857 19:49:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:32.857 19:49:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.857 19:49:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:32.857 19:49:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.857 19:49:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:32.857 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:32.857 19:49:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.857 19:49:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:32.857 19:49:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:32.857 19:49:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:32.857 19:49:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:32.857 19:49:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.857 19:49:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.857 19:49:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.857 19:49:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:32.857 19:49:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.857 19:49:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.857 19:49:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:32.857 19:49:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.857 19:49:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.857 19:49:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:32.857 19:49:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:32.857 19:49:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.857 19:49:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.857 19:49:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.857 19:49:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.857 19:49:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:32.857 19:49:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.857 19:49:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.857 19:49:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.857 19:49:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:32.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:18:32.857 00:18:32.857 --- 10.0.0.2 ping statistics --- 00:18:32.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.857 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:18:32.857 19:49:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:32.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:18:32.857 00:18:32.857 --- 10.0.0.1 ping statistics --- 00:18:32.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.857 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:32.858 19:49:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.858 19:49:13 -- nvmf/common.sh@411 -- # return 0 00:18:32.858 19:49:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:32.858 19:49:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.858 19:49:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:32.858 19:49:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:32.858 19:49:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.858 19:49:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:32.858 19:49:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:32.858 19:49:13 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:32.858 19:49:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:32.858 19:49:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:32.858 19:49:13 -- common/autotest_common.sh@10 -- # set +x 00:18:32.858 19:49:14 -- nvmf/common.sh@470 -- # nvmfpid=1743334 00:18:32.858 19:49:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:32.858 19:49:14 -- nvmf/common.sh@471 -- # waitforlisten 1743334 00:18:32.858 19:49:14 -- common/autotest_common.sh@817 -- # '[' -z 1743334 ']' 00:18:32.858 19:49:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.858 19:49:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:32.858 19:49:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.858 19:49:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:32.858 19:49:14 -- common/autotest_common.sh@10 -- # set +x 00:18:32.858 [2024-04-24 19:49:14.049014] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:18:32.858 [2024-04-24 19:49:14.049110] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.858 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.858 [2024-04-24 19:49:14.120817] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:32.858 [2024-04-24 19:49:14.236092] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.858 [2024-04-24 19:49:14.236162] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.858 [2024-04-24 19:49:14.236180] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.858 [2024-04-24 19:49:14.236193] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.858 [2024-04-24 19:49:14.236206] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
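(The nvmfappstart/waitforlisten pair traced here amounts to launching nvmf_tgt inside the namespace and polling its RPC socket until it answers. Below is a minimal sketch of that idea; the real helper in autotest_common.sh is more defensive, and the rpc_get_methods probe is one plausible liveness check, not necessarily the exact one the script uses.)

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # block until the target answers on its UNIX-domain RPC socket; bail out if it died during startup
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.5
    done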
00:18:32.858 [2024-04-24 19:49:14.236298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.858 [2024-04-24 19:49:14.236419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.858 [2024-04-24 19:49:14.236422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.793 19:49:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:33.793 19:49:14 -- common/autotest_common.sh@850 -- # return 0 00:18:33.794 19:49:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:33.794 19:49:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:33.794 19:49:14 -- common/autotest_common.sh@10 -- # set +x 00:18:33.794 19:49:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.794 19:49:15 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:33.794 [2024-04-24 19:49:15.220189] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.794 19:49:15 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:34.051 Malloc0 00:18:34.051 19:49:15 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:34.309 19:49:15 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:34.568 19:49:15 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:34.827 [2024-04-24 19:49:16.229672] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.827 19:49:16 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:35.086 [2024-04-24 19:49:16.470355] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:35.086 19:49:16 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:35.344 [2024-04-24 19:49:16.759279] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:35.344 19:49:16 -- host/failover.sh@31 -- # bdevperf_pid=1743634 00:18:35.344 19:49:16 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:35.344 19:49:16 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:35.344 19:49:16 -- host/failover.sh@34 -- # waitforlisten 1743634 /var/tmp/bdevperf.sock 00:18:35.344 19:49:16 -- common/autotest_common.sh@817 -- # '[' -z 1743634 ']' 00:18:35.344 19:49:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.344 19:49:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:35.344 19:49:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...'
00:18:35.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:35.344 19:49:16 -- common/autotest_common.sh@826 -- # xtrace_disable
00:18:35.344 19:49:16 -- common/autotest_common.sh@10 -- # set +x
00:18:35.602 19:49:17 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:18:35.602 19:49:17 -- common/autotest_common.sh@850 -- # return 0
00:18:35.602 19:49:17 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:36.166 NVMe0n1
00:18:36.166 19:49:17 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:36.730
00:18:36.730 19:49:18 -- host/failover.sh@39 -- # run_test_pid=1743770
00:18:36.730 19:49:18 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:36.730 19:49:18 -- host/failover.sh@41 -- # sleep 1
00:18:37.665 19:49:19 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:37.922 [2024-04-24 19:49:19.244088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1125e40 is same with the state(5) to be set
00:18:37.922 [... identical tqpair=0x1125e40 messages repeated with successive timestamps through 19:49:19.244563, elided ...]
00:18:37.923 19:49:19 -- host/failover.sh@45 -- # sleep 3
00:18:41.204 19:49:22 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:41.462
00:18:41.462 19:49:22 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:18:41.723 [2024-04-24 19:49:22.986591] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1127540 is same with the state(5) to be set
00:18:41.723 [... identical tqpair=0x1127540 messages repeated with successive timestamps through 19:49:22.987471, elided ...]
00:18:41.724 19:49:23 -- host/failover.sh@50 -- # sleep 3
00:18:45.026 19:49:26 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:45.026 [2024-04-24 19:49:26.243053] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:45.026 19:49:26 -- host/failover.sh@55 -- # sleep 1
00:18:45.965 19:49:27 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:18:46.224 [2024-04-24 19:49:27.504742] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1127c20 is same with the state(5) to be set
00:18:46.224 [... identical tqpair=0x1127c20 messages repeated with successive timestamps through 19:49:27.505046, elided ...]
00:18:46.224 19:49:27 -- host/failover.sh@59 -- # wait 1743770
00:18:52.809 0
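(Condensed, the failover exercise just completed is listener juggling around a 15-second bdevperf verify run. A sketch follows, with $rpc standing in for the full scripts/rpc.py invocations shown above.)

    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # second path for multipath
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &                                        # verify I/O against NVMe0n1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420    # drop the active path; I/O fails over to 4421
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421    # second failover, onto 4422
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420       # restore the original listener
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422    # fail back to 4420
    wait    # bdevperf must finish cleanly (the "0" above) for the test to pass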
00:18:52.809 19:49:33 -- host/failover.sh@61 -- # killprocess 1743634
00:18:52.809 19:49:33 -- common/autotest_common.sh@936 -- # '[' -z 1743634 ']'
00:18:52.809 19:49:33 -- common/autotest_common.sh@940 -- # kill -0 1743634
00:18:52.809 19:49:33 -- common/autotest_common.sh@941 -- # uname
00:18:52.809 19:49:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:52.809 19:49:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1743634
00:18:52.809 19:49:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:52.809 19:49:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:52.809 19:49:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1743634'
00:18:52.809 killing process with pid 1743634
00:18:52.809 19:49:33 -- common/autotest_common.sh@955 -- # kill 1743634
00:18:52.809 19:49:33 -- common/autotest_common.sh@960 -- # wait 1743634
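The killprocess trace above is the harness's common shell helper doing a guarded shutdown: confirm the pid is non-empty and still alive, check what the process actually is, refuse to kill a sudo wrapper, then kill and reap it. A sketch reconstructed from the trace (the real helper lives in test/common/autotest_common.sh; anything beyond what the trace shows is an assumption):

    # Sketch of the guarded kill seen above; the argument is the pid to stop.
    killprocess() {
        local pid=$1
        local process_name=""
        [ -z "$pid" ] && return 1                # @936: reject an empty pid
        kill -0 "$pid" || return 1               # @940: is the process still alive?
        if [ "$(uname)" = Linux ]; then          # @941
            process_name=$(ps --no-headers -o comm= "$pid")   # @942 (here: reactor_0)
        fi
        # @946: never kill a sudo wrapper directly (the real helper treats
        # that case specially; returning is an assumption of this sketch).
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"     # @954
        kill "$pid"                              # @955
        wait "$pid"                              # @960: reap and propagate exit status
    }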
00:18:52.809 19:49:33 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:18:52.809 [2024-04-24 19:49:16.821583] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization...
00:18:52.809 [2024-04-24 19:49:16.821682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743634 ]
00:18:52.809 EAL: No free 2048 kB hugepages reported on node 1
00:18:52.809 [2024-04-24 19:49:16.880885] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:52.809 [2024-04-24 19:49:16.988757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:52.809 Running I/O for 15 seconds...
00:18:52.809 [2024-04-24 19:49:19.245156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:52.809 [2024-04-24 19:49:19.245199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.809 [... the same READ command / "ABORTED - SQ DELETION (00/08)" completion pair repeated for every outstanding read, lba:76552 through lba:77064, 19:49:19.245227 through 19:49:19.247118 ...]
00:18:52.810 [2024-04-24 19:49:19.247133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:52.810 [2024-04-24 19:49:19.247146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.810 [... the same WRITE command / aborted-completion pair repeated for lba:77080 through lba:77448, 19:49:19.247160 through 19:49:19.248503 ...]
00:18:52.812 [2024-04-24 19:49:19.248535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:52.812 [2024-04-24 19:49:19.248553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77456 len:8 PRP1 0x0 PRP2 0x0
00:18:52.812 [2024-04-24 19:49:19.248566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.812 [... the same "aborting queued i/o" / manual-completion group repeated for the queued writes lba:77464 through lba:77560, 19:49:19.248584 through 19:49:19.249193 ...]
00:18:52.812 [2024-04-24 19:49:19.249252] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x689e70 was disconnected and freed. reset controller.
00:18:52.812 [2024-04-24 19:49:19.249273] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:18:52.812 [2024-04-24 19:49:19.249306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:52.812 [2024-04-24 19:49:19.249323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.812 [... the same ASYNC EVENT REQUEST / aborted-completion pair repeated for admin cid:1, cid:2 and cid:3, 19:49:19.249338 through 19:49:19.249402 ...]
00:18:52.812 [2024-04-24 19:49:19.249415] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:52.812 [2024-04-24 19:49:19.249474] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66b3f0 (9): Bad file descriptor
00:18:52.812 [2024-04-24 19:49:19.252684] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:52.812 [2024-04-24 19:49:19.285685] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
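The bdev_nvme_failover_trid notice above implies the bdevperf host knows the same subsystem through more than one portal, so when the 4420 qpair drops it can retry on 4421. That is typically arranged by attaching the controller once per path; a hedged sketch of that setup, assuming the standard rpc.py bdev_nvme_attach_controller command (the -x failover mode flag and the exact attach sequence are assumptions, not taken from this log):

    # Sketch: register a primary and an alternate trid for the same bdev,
    # so bdev_nvme can fail over when a qpair is lost.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Primary path (the portal removed at 10.0.0.2:4420 above)...
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n "$nqn"
    # ...and an alternate path; on disconnect the bdev layer retries here
    # (-x failover keeps a single active path rather than true multipath).
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4421 -n "$nqn" -x failover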
00:18:52.812 [2024-04-24 19:49:22.988747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:52.812 [2024-04-24 19:49:22.988789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.813 [... the same command / "ABORTED - SQ DELETION (00/08)" completion pair repeated for WRITE lba:83480 through lba:83536, READ lba:83224 through lba:83272, and WRITE lba:83544 through lba:83648, 19:49:22.988819 through 19:49:22.989648 ...]
00:18:52.813 [2024-04-24 19:49:22.989664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83656 len:8
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.813 [2024-04-24 19:49:22.989677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.813 [2024-04-24 19:49:22.989694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.813 [2024-04-24 19:49:22.989707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.813 [2024-04-24 19:49:22.989726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.813 [2024-04-24 19:49:22.989740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.813 [2024-04-24 19:49:22.989754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.813 [2024-04-24 19:49:22.989767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.813 [2024-04-24 19:49:22.989782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.813 [2024-04-24 19:49:22.989796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.813 [2024-04-24 19:49:22.989810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.813 [2024-04-24 19:49:22.989824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.813 [2024-04-24 19:49:22.989838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.813 [2024-04-24 19:49:22.989852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.813 [2024-04-24 19:49:22.989866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.989879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.989894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.989907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.989921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.989934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.989949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 
[2024-04-24 19:49:22.989965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.989979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.989992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.814 [2024-04-24 19:49:22.990415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.814 [2024-04-24 19:49:22.990452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.814 [2024-04-24 19:49:22.990481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.814 [2024-04-24 19:49:22.990508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.814 [2024-04-24 19:49:22.990536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.814 [2024-04-24 19:49:22.990564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.814 [2024-04-24 19:49:22.990592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.814 [2024-04-24 19:49:22.990620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.990982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.814 [2024-04-24 19:49:22.990996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.814 [2024-04-24 19:49:22.991010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 
19:49:22.991135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.815 [2024-04-24 19:49:22.991433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.815 [2024-04-24 19:49:22.991478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84088 len:8 PRP1 0x0 PRP2 0x0 00:18:52.815 [2024-04-24 19:49:22.991491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.815 [2024-04-24 19:49:22.991522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.815 [2024-04-24 19:49:22.991533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84096 len:8 PRP1 0x0 PRP2 0x0 00:18:52.815 [2024-04-24 19:49:22.991545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.815 [2024-04-24 19:49:22.991573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.815 [2024-04-24 19:49:22.991585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84104 len:8 PRP1 0x0 PRP2 0x0 00:18:52.815 [2024-04-24 19:49:22.991598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.815 [2024-04-24 19:49:22.991621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.815 [2024-04-24 19:49:22.991642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84112 len:8 PRP1 0x0 PRP2 0x0 00:18:52.815 [2024-04-24 19:49:22.991655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.815 [2024-04-24 19:49:22.991687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.815 [2024-04-24 19:49:22.991699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84120 len:8 PRP1 0x0 PRP2 0x0 00:18:52.815 [2024-04-24 19:49:22.991711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.815 [2024-04-24 19:49:22.991735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.815 [2024-04-24 19:49:22.991745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84128 len:8 PRP1 0x0 PRP2 0x0 00:18:52.815 [2024-04-24 19:49:22.991758] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.815 [2024-04-24 19:49:22.991781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.815 [2024-04-24 19:49:22.991792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84136 len:8 PRP1 0x0 PRP2 0x0 00:18:52.815 [2024-04-24 19:49:22.991804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.815 [2024-04-24 19:49:22.991828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.815 [2024-04-24 19:49:22.991839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84144 len:8 PRP1 0x0 PRP2 0x0 00:18:52.815 [2024-04-24 19:49:22.991851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.815 [2024-04-24 19:49:22.991874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.815 [2024-04-24 19:49:22.991885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84152 len:8 PRP1 0x0 PRP2 0x0 00:18:52.815 [2024-04-24 19:49:22.991897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.815 [2024-04-24 19:49:22.991925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.815 [2024-04-24 19:49:22.991936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84160 len:8 PRP1 0x0 PRP2 0x0 00:18:52.815 [2024-04-24 19:49:22.991952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.991965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.815 [2024-04-24 19:49:22.991975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.815 [2024-04-24 19:49:22.991986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84168 len:8 PRP1 0x0 PRP2 0x0 00:18:52.815 [2024-04-24 19:49:22.991998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.992011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.815 [2024-04-24 19:49:22.992021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.815 [2024-04-24 19:49:22.992032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84176 len:8 PRP1 0x0 PRP2 0x0 00:18:52.815 [2024-04-24 19:49:22.992044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.815 [2024-04-24 19:49:22.992057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83344 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83352 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83360 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83368 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83376 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83384 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83392 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83400 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84184 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84192 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84200 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84208 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992614] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84216 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84224 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84232 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84240 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83408 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83416 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83424 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.992962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.992975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.992985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.992996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83432 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.993008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.816 [2024-04-24 19:49:22.993020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.816 [2024-04-24 19:49:22.993031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.816 [2024-04-24 19:49:22.993042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83440 len:8 PRP1 0x0 PRP2 0x0 00:18:52.816 [2024-04-24 19:49:22.993054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.817 [2024-04-24 19:49:22.993071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.817 [2024-04-24 19:49:22.993082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.817 [2024-04-24 19:49:22.993093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83448 len:8 PRP1 0x0 PRP2 0x0 00:18:52.817 [2024-04-24 19:49:22.993105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.817 [2024-04-24 19:49:22.993117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.817 [2024-04-24 19:49:22.993128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.817 [2024-04-24 19:49:22.993139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83456 len:8 PRP1 0x0 PRP2 0x0 00:18:52.817 [2024-04-24 19:49:22.993151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.817 [2024-04-24 19:49:22.993163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.817 [2024-04-24 19:49:22.993174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.817 [2024-04-24 19:49:22.993185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83464 len:8 PRP1 0x0 PRP2 0x0 00:18:52.817 [2024-04-24 19:49:22.993197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.817 [2024-04-24 19:49:22.993264] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6778e0 was disconnected and freed. 
00:18:52.817 [2024-04-24 19:49:22.993282] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:18:52.817 [2024-04-24 19:49:22.993324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:52.817 [2024-04-24 19:49:22.993341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.817 [2024-04-24 19:49:22.993356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:52.817 [2024-04-24 19:49:22.993369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.817 [2024-04-24 19:49:22.993382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:52.817 [2024-04-24 19:49:22.993394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.817 [2024-04-24 19:49:22.993407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:52.817 [2024-04-24 19:49:22.993420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:52.817 [2024-04-24 19:49:22.993433] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:52.817 [2024-04-24 19:49:22.993488] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66b3f0 (9): Bad file descriptor
00:18:52.817 [2024-04-24 19:49:22.996717] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:52.817 [2024-04-24 19:49:23.161201] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:52.817 [2024-04-24 19:49:27.505215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[... repeated nvme_qpair.c NOTICE pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion): qid:1 READ (lba 46512-46624) and WRITE (lba 47128-47408) commands again reported as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 after the failover ...]
00:18:52.818 [2024-04-24 19:49:27.506782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-04-24
19:49:27.506795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.818 [2024-04-24 19:49:27.506810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.506828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.506844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.819 [2024-04-24 19:49:27.506857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.506872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.819 [2024-04-24 19:49:27.506886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.506901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.819 [2024-04-24 19:49:27.506929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.506944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.819 [2024-04-24 19:49:27.506957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.506971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.819 [2024-04-24 19:49:27.506984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.506999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.819 [2024-04-24 19:49:27.507012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.819 [2024-04-24 19:49:27.507039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:46744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.819 [2024-04-24 19:49:27.507733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.819 [2024-04-24 19:49:27.507762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.819 [2024-04-24 19:49:27.507790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.819 [2024-04-24 19:49:27.507819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.507979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.819 [2024-04-24 19:49:27.507995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.819 [2024-04-24 19:49:27.508009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 
[2024-04-24 19:49:27.508024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508316] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508608] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.820 [2024-04-24 19:49:27.508794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.820 [2024-04-24 19:49:27.508822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.820 [2024-04-24 19:49:27.508849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:52.820 [2024-04-24 19:49:27.508878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47072 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.820 [2024-04-24 19:49:27.508984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.820 [2024-04-24 19:49:27.508998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.821 [2024-04-24 19:49:27.509013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.821 [2024-04-24 19:49:27.509026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.821 [2024-04-24 19:49:27.509041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.821 [2024-04-24 19:49:27.509054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.821 [2024-04-24 19:49:27.509069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.821 [2024-04-24 19:49:27.509083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.821 [2024-04-24 19:49:27.509097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8349a0 is same with the state(5) to be set 00:18:52.821 [2024-04-24 19:49:27.509118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:52.821 [2024-04-24 19:49:27.509130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:52.821 [2024-04-24 19:49:27.509142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47120 len:8 PRP1 0x0 PRP2 0x0 00:18:52.821 [2024-04-24 19:49:27.509154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.821 [2024-04-24 19:49:27.509215] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8349a0 was disconnected and freed. reset controller. 
00:18:52.821 [2024-04-24 19:49:27.509234] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:18:52.821 [2024-04-24 19:49:27.509266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.821 [2024-04-24 19:49:27.509284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.821 [2024-04-24 19:49:27.509299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.821 [2024-04-24 19:49:27.509313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.821 [2024-04-24 19:49:27.509333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.821 [2024-04-24 19:49:27.509347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.821 [2024-04-24 19:49:27.509360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:52.821 [2024-04-24 19:49:27.509373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:52.821 [2024-04-24 19:49:27.509386] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:52.821 [2024-04-24 19:49:27.509424] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66b3f0 (9): Bad file descriptor 00:18:52.821 [2024-04-24 19:49:27.512653] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:52.821 [2024-04-24 19:49:27.704029] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
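The failover captured above — queued I/O aborted on SQ deletion, the trid moved from 10.0.0.2:4422 back to 10.0.0.2:4420, then a successful controller reset — is driven entirely over RPC. As a minimal sketch of the sequence traced at host/failover.sh@76-@84 further down in this log (the SPDK variable is shorthand added here for readability, not part of the traced script):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # publish two extra listeners the initiator can fail over to
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # register all three paths with the bdevperf instance listening on /var/tmp/bdevperf.sock
  for port in 4420 4421 4422; do
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # drop the active path; the bdev_nvme layer fails over to the next registered listener
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1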
00:18:52.821 
00:18:52.821 Latency(us) 
00:18:52.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:52.821 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:18:52.821 Verification LBA range: start 0x0 length 0x4000 
00:18:52.821 NVMe0n1 : 15.01 8558.05 33.43 1023.66 0.00 13330.40 1080.13 16311.18 
00:18:52.821 =================================================================================================================== 
00:18:52.821 Total : 8558.05 33.43 1023.66 0.00 13330.40 1080.13 16311.18 
00:18:52.821 Received shutdown signal, test time was about 15.000000 seconds 
00:18:52.821 
00:18:52.821 Latency(us) 
00:18:52.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:52.821 =================================================================================================================== 
00:18:52.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:18:52.821 19:49:33 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:18:52.821 19:49:33 -- host/failover.sh@65 -- # count=3 
00:18:52.821 19:49:33 -- host/failover.sh@67 -- # (( count != 3 )) 
00:18:52.821 19:49:33 -- host/failover.sh@73 -- # bdevperf_pid=1745616 
00:18:52.821 19:49:33 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 
00:18:52.821 19:49:33 -- host/failover.sh@75 -- # waitforlisten 1745616 /var/tmp/bdevperf.sock 
00:18:52.821 19:49:33 -- common/autotest_common.sh@817 -- # '[' -z 1745616 ']' 
00:18:52.821 19:49:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:52.821 19:49:33 -- common/autotest_common.sh@822 -- # local max_retries=100 
00:18:52.821 19:49:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
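The pass criterion at host/failover.sh@65-@67 above is simply a count of reset completions in the captured bdevperf output: the preceding run is expected to have logged exactly three 'Resetting controller successful' lines, one per failover it forced. A sketch of that check, assuming the try.txt capture used by this test (the out variable is shorthand added here):

  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  count=$(grep -c 'Resetting controller successful' "$out")
  (( count != 3 )) && exit 1  # fail unless all three failovers completed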
00:18:52.821 19:49:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:52.821 19:49:33 -- common/autotest_common.sh@10 -- # set +x 00:18:52.821 19:49:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:52.821 19:49:33 -- common/autotest_common.sh@850 -- # return 0 00:18:52.821 19:49:33 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:52.821 [2024-04-24 19:49:34.036708] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:52.821 19:49:34 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:52.821 [2024-04-24 19:49:34.293442] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:52.821 19:49:34 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:53.388 NVMe0n1 00:18:53.388 19:49:34 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:53.647 00:18:53.647 19:49:35 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:54.215 00:18:54.215 19:49:35 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:54.215 19:49:35 -- host/failover.sh@82 -- # grep -q NVMe0 00:18:54.215 19:49:35 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:54.474 19:49:35 -- host/failover.sh@87 -- # sleep 3 00:18:57.762 19:49:38 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:57.763 19:49:38 -- host/failover.sh@88 -- # grep -q NVMe0 00:18:57.763 19:49:39 -- host/failover.sh@90 -- # run_test_pid=1746287 00:18:57.763 19:49:39 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:57.763 19:49:39 -- host/failover.sh@92 -- # wait 1746287 00:18:59.139 0 00:18:59.139 19:49:40 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:59.139 [2024-04-24 19:49:33.529217] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:18:59.139 [2024-04-24 19:49:33.529305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745616 ] 00:18:59.139 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.139 [2024-04-24 19:49:33.587712] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.139 [2024-04-24 19:49:33.693712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.139 [2024-04-24 19:49:35.915601] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:59.139 [2024-04-24 19:49:35.915688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:59.139 [2024-04-24 19:49:35.915710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.139 [2024-04-24 19:49:35.915727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:59.139 [2024-04-24 19:49:35.915742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.139 [2024-04-24 19:49:35.915756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:59.139 [2024-04-24 19:49:35.915769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.139 [2024-04-24 19:49:35.915783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:59.139 [2024-04-24 19:49:35.915797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:59.139 [2024-04-24 19:49:35.915811] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:59.139 [2024-04-24 19:49:35.915855] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:59.139 [2024-04-24 19:49:35.915886] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad33f0 (9): Bad file descriptor 00:18:59.139 [2024-04-24 19:49:35.962471] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:59.139 Running I/O for 1 seconds... 
00:18:59.139 
00:18:59.139 Latency(us) 
00:18:59.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:59.139 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:18:59.139 Verification LBA range: start 0x0 length 0x4000 
00:18:59.139 NVMe0n1 : 1.01 8494.45 33.18 0.00 0.00 15010.69 2390.85 13398.47 
00:18:59.139 =================================================================================================================== 
00:18:59.139 Total : 8494.45 33.18 0.00 0.00 15010.69 2390.85 13398.47 
00:18:59.139 19:49:40 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
19:49:40 -- host/failover.sh@95 -- # grep -q NVMe0 
19:49:40 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:18:59.398 19:49:40 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
19:49:40 -- host/failover.sh@99 -- # grep -q NVMe0 
00:18:59.656 19:49:41 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:18:59.913 19:49:41 -- host/failover.sh@101 -- # sleep 3 
00:19:03.209 19:49:44 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:19:03.209 19:49:44 -- host/failover.sh@103 -- # grep -q NVMe0 
00:19:03.209 19:49:44 -- host/failover.sh@108 -- # killprocess 1745616 
00:19:03.209 19:49:44 -- common/autotest_common.sh@936 -- # '[' -z 1745616 ']' 
00:19:03.209 19:49:44 -- common/autotest_common.sh@940 -- # kill -0 1745616 
00:19:03.209 19:49:44 -- common/autotest_common.sh@941 -- # uname 
00:19:03.209 19:49:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:19:03.209 19:49:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1745616 
00:19:03.209 19:49:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 
00:19:03.209 19:49:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 
00:19:03.209 19:49:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1745616' 
killing process with pid 1745616 
00:19:03.209 19:49:44 -- common/autotest_common.sh@955 -- # kill 1745616 
00:19:03.209 19:49:44 -- common/autotest_common.sh@960 -- # wait 1745616 
00:19:03.468 19:49:44 -- host/failover.sh@110 -- # sync 
00:19:03.468 19:49:44 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:19:03.728 19:49:45 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 
00:19:03.728 19:49:45 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
00:19:03.728 19:49:45 -- host/failover.sh@116 -- # nvmftestfini 
00:19:03.728 19:49:45 -- nvmf/common.sh@477 -- # nvmfcleanup 
00:19:03.728 19:49:45 -- nvmf/common.sh@117 -- # sync 
00:19:03.728 19:49:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
00:19:03.728 19:49:45 -- nvmf/common.sh@120 -- # set +e 
00:19:03.728 19:49:45 -- nvmf/common.sh@121 -- # for i in {1..20} 
00:19:03.728 19:49:45 -- nvmf/common.sh@122 --
# modprobe -v -r nvme-tcp 00:19:03.728 rmmod nvme_tcp 00:19:03.728 rmmod nvme_fabrics 00:19:03.728 rmmod nvme_keyring 00:19:03.728 19:49:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:03.728 19:49:45 -- nvmf/common.sh@124 -- # set -e 00:19:03.728 19:49:45 -- nvmf/common.sh@125 -- # return 0 00:19:03.728 19:49:45 -- nvmf/common.sh@478 -- # '[' -n 1743334 ']' 00:19:03.728 19:49:45 -- nvmf/common.sh@479 -- # killprocess 1743334 00:19:03.728 19:49:45 -- common/autotest_common.sh@936 -- # '[' -z 1743334 ']' 00:19:03.728 19:49:45 -- common/autotest_common.sh@940 -- # kill -0 1743334 00:19:03.728 19:49:45 -- common/autotest_common.sh@941 -- # uname 00:19:03.728 19:49:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:03.728 19:49:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1743334 00:19:03.728 19:49:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:03.728 19:49:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:03.728 19:49:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1743334' 00:19:03.728 killing process with pid 1743334 00:19:03.728 19:49:45 -- common/autotest_common.sh@955 -- # kill 1743334 00:19:03.728 19:49:45 -- common/autotest_common.sh@960 -- # wait 1743334 00:19:04.014 19:49:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:04.014 19:49:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:04.014 19:49:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:04.014 19:49:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:04.014 19:49:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:04.014 19:49:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.014 19:49:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.014 19:49:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.609 19:49:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:06.609 00:19:06.609 real 0m35.857s 00:19:06.609 user 2m3.264s 00:19:06.609 sys 0m6.721s 00:19:06.609 19:49:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:06.609 19:49:47 -- common/autotest_common.sh@10 -- # set +x 00:19:06.609 ************************************ 00:19:06.609 END TEST nvmf_failover 00:19:06.609 ************************************ 00:19:06.609 19:49:47 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:06.609 19:49:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:06.609 19:49:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:06.610 19:49:47 -- common/autotest_common.sh@10 -- # set +x 00:19:06.610 ************************************ 00:19:06.610 START TEST nvmf_discovery 00:19:06.610 ************************************ 00:19:06.610 19:49:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:06.610 * Looking for test storage... 
00:19:06.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:06.610 19:49:47 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.610 19:49:47 -- nvmf/common.sh@7 -- # uname -s 00:19:06.610 19:49:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.610 19:49:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.610 19:49:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.610 19:49:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.610 19:49:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.610 19:49:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.610 19:49:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.610 19:49:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.610 19:49:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.610 19:49:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.610 19:49:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.610 19:49:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.610 19:49:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.610 19:49:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.610 19:49:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.610 19:49:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.610 19:49:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.610 19:49:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.610 19:49:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.610 19:49:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.610 19:49:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.610 19:49:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.610 19:49:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.610 19:49:47 -- paths/export.sh@5 -- # export PATH 00:19:06.610 19:49:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.610 19:49:47 -- nvmf/common.sh@47 -- # : 0 00:19:06.610 19:49:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:06.610 19:49:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:06.610 19:49:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.610 19:49:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.610 19:49:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.610 19:49:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:06.610 19:49:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:06.610 19:49:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:06.610 19:49:47 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:06.610 19:49:47 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:06.610 19:49:47 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:06.610 19:49:47 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:06.610 19:49:47 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:06.610 19:49:47 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:06.610 19:49:47 -- host/discovery.sh@25 -- # nvmftestinit 00:19:06.610 19:49:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:06.610 19:49:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.610 19:49:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:06.610 19:49:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:06.610 19:49:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:06.610 19:49:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.610 19:49:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.610 19:49:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.610 19:49:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:06.610 19:49:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:06.610 19:49:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:06.610 19:49:47 -- common/autotest_common.sh@10 -- # set +x 00:19:08.511 19:49:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:08.511 19:49:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:08.511 19:49:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:08.511 19:49:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:08.511 19:49:49 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:08.511 19:49:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:08.511 19:49:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:08.511 19:49:49 -- nvmf/common.sh@295 -- # net_devs=() 00:19:08.511 19:49:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:08.511 19:49:49 -- nvmf/common.sh@296 -- # e810=() 00:19:08.511 19:49:49 -- nvmf/common.sh@296 -- # local -ga e810 00:19:08.511 19:49:49 -- nvmf/common.sh@297 -- # x722=() 00:19:08.511 19:49:49 -- nvmf/common.sh@297 -- # local -ga x722 00:19:08.511 19:49:49 -- nvmf/common.sh@298 -- # mlx=() 00:19:08.511 19:49:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:08.511 19:49:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.511 19:49:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.511 19:49:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.511 19:49:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.511 19:49:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.511 19:49:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.511 19:49:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.511 19:49:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.511 19:49:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.511 19:49:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.511 19:49:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.511 19:49:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:08.511 19:49:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:08.511 19:49:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:08.511 19:49:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.511 19:49:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:08.511 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:08.511 19:49:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.511 19:49:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:08.511 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:08.511 19:49:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:08.511 19:49:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:08.511 19:49:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.511 
19:49:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.511 19:49:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:08.511 19:49:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.511 19:49:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:08.511 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:08.511 19:49:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.512 19:49:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.512 19:49:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.512 19:49:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:08.512 19:49:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.512 19:49:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:08.512 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:08.512 19:49:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.512 19:49:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:08.512 19:49:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:08.512 19:49:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:08.512 19:49:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:08.512 19:49:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:08.512 19:49:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.512 19:49:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.512 19:49:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.512 19:49:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:08.512 19:49:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.512 19:49:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.512 19:49:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:08.512 19:49:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.512 19:49:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.512 19:49:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:08.512 19:49:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:08.512 19:49:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.512 19:49:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.512 19:49:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.512 19:49:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.512 19:49:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:08.512 19:49:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.512 19:49:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:08.512 19:49:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.512 19:49:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:08.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:19:08.512 00:19:08.512 --- 10.0.0.2 ping statistics --- 00:19:08.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.512 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:19:08.512 19:49:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:08.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:19:08.512 00:19:08.512 --- 10.0.0.1 ping statistics --- 00:19:08.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.512 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:19:08.512 19:49:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.512 19:49:49 -- nvmf/common.sh@411 -- # return 0 00:19:08.512 19:49:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:08.512 19:49:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.512 19:49:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:08.512 19:49:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:08.512 19:49:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.512 19:49:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:08.512 19:49:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:08.512 19:49:49 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:08.512 19:49:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:08.512 19:49:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:08.512 19:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:08.512 19:49:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:08.512 19:49:49 -- nvmf/common.sh@470 -- # nvmfpid=1748896 00:19:08.512 19:49:49 -- nvmf/common.sh@471 -- # waitforlisten 1748896 00:19:08.512 19:49:49 -- common/autotest_common.sh@817 -- # '[' -z 1748896 ']' 00:19:08.512 19:49:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.512 19:49:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:08.512 19:49:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.512 19:49:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:08.512 19:49:49 -- common/autotest_common.sh@10 -- # set +x 00:19:08.512 [2024-04-24 19:49:49.917518] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:19:08.512 [2024-04-24 19:49:49.917580] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.512 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.512 [2024-04-24 19:49:49.986152] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.771 [2024-04-24 19:49:50.117825] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.771 [2024-04-24 19:49:50.117891] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.771 [2024-04-24 19:49:50.117920] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.771 [2024-04-24 19:49:50.117931] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.771 [2024-04-24 19:49:50.117940] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
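(Editor's note: the nvmf_tcp_init sequence traced above boils down to the following sketch. The interface and namespace names are the ones this run actually used, the two E810 ports enumerate as cvl_0_0/cvl_0_1; treat this as a reconstruction from the xtrace, not the verbatim nvmf/common.sh implementation.)

    NS=cvl_0_0_ns_spdk                       # target side lives in its own namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # move the target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator IP stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the netns
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                       # root ns -> target, verified above
    ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> initiator
    # the target application then runs entirely inside the namespace:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

Running the target under ip netns exec is what makes the two ports of one physical NIC behave as two hosts: the 10.0.0.2 listener is only reachable over the wire between cvl_0_1 and cvl_0_0.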
00:19:08.771 [2024-04-24 19:49:50.117979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.709 19:49:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:09.709 19:49:50 -- common/autotest_common.sh@850 -- # return 0 00:19:09.709 19:49:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:09.709 19:49:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:09.709 19:49:50 -- common/autotest_common.sh@10 -- # set +x 00:19:09.709 19:49:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.709 19:49:50 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:09.709 19:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.709 19:49:50 -- common/autotest_common.sh@10 -- # set +x 00:19:09.709 [2024-04-24 19:49:50.919711] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.709 19:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.709 19:49:50 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:09.709 19:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.709 19:49:50 -- common/autotest_common.sh@10 -- # set +x 00:19:09.709 [2024-04-24 19:49:50.927880] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:09.709 19:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.709 19:49:50 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:09.709 19:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.709 19:49:50 -- common/autotest_common.sh@10 -- # set +x 00:19:09.709 null0 00:19:09.709 19:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.709 19:49:50 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:09.709 19:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.709 19:49:50 -- common/autotest_common.sh@10 -- # set +x 00:19:09.709 null1 00:19:09.709 19:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.709 19:49:50 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:09.709 19:49:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.709 19:49:50 -- common/autotest_common.sh@10 -- # set +x 00:19:09.709 19:49:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.709 19:49:50 -- host/discovery.sh@45 -- # hostpid=1749054 00:19:09.709 19:49:50 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:09.709 19:49:50 -- host/discovery.sh@46 -- # waitforlisten 1749054 /tmp/host.sock 00:19:09.709 19:49:50 -- common/autotest_common.sh@817 -- # '[' -z 1749054 ']' 00:19:09.709 19:49:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:19:09.709 19:49:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:09.709 19:49:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:09.709 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:09.709 19:49:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:09.709 19:49:50 -- common/autotest_common.sh@10 -- # set +x 00:19:09.709 [2024-04-24 19:49:50.998441] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:19:09.709 [2024-04-24 19:49:50.998506] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749054 ] 00:19:09.709 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.709 [2024-04-24 19:49:51.058883] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.709 [2024-04-24 19:49:51.184965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.651 19:49:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:10.651 19:49:51 -- common/autotest_common.sh@850 -- # return 0 00:19:10.651 19:49:51 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:10.651 19:49:51 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:10.651 19:49:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.651 19:49:51 -- common/autotest_common.sh@10 -- # set +x 00:19:10.651 19:49:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.651 19:49:51 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:10.651 19:49:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.651 19:49:51 -- common/autotest_common.sh@10 -- # set +x 00:19:10.651 19:49:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.651 19:49:51 -- host/discovery.sh@72 -- # notify_id=0 00:19:10.651 19:49:51 -- host/discovery.sh@83 -- # get_subsystem_names 00:19:10.651 19:49:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:10.651 19:49:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:10.651 19:49:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.651 19:49:51 -- common/autotest_common.sh@10 -- # set +x 00:19:10.651 19:49:51 -- host/discovery.sh@59 -- # sort 00:19:10.651 19:49:51 -- host/discovery.sh@59 -- # xargs 00:19:10.651 19:49:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.651 19:49:51 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:10.651 19:49:51 -- host/discovery.sh@84 -- # get_bdev_list 00:19:10.651 19:49:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:10.651 19:49:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:10.651 19:49:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.651 19:49:51 -- common/autotest_common.sh@10 -- # set +x 00:19:10.651 19:49:51 -- host/discovery.sh@55 -- # sort 00:19:10.651 19:49:51 -- host/discovery.sh@55 -- # xargs 00:19:10.651 19:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.651 19:49:52 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:10.651 19:49:52 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:10.651 19:49:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.651 19:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.651 19:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.651 19:49:52 -- host/discovery.sh@87 -- # get_subsystem_names 00:19:10.651 19:49:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:10.651 19:49:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.651 19:49:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:10.651 19:49:52 -- common/autotest_common.sh@10 -- # set 
+x 00:19:10.651 19:49:52 -- host/discovery.sh@59 -- # sort 00:19:10.651 19:49:52 -- host/discovery.sh@59 -- # xargs 00:19:10.651 19:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.651 19:49:52 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:10.651 19:49:52 -- host/discovery.sh@88 -- # get_bdev_list 00:19:10.651 19:49:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:10.651 19:49:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:10.651 19:49:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.651 19:49:52 -- host/discovery.sh@55 -- # sort 00:19:10.651 19:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.651 19:49:52 -- host/discovery.sh@55 -- # xargs 00:19:10.651 19:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.651 19:49:52 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:10.651 19:49:52 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:10.651 19:49:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.651 19:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.651 19:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.651 19:49:52 -- host/discovery.sh@91 -- # get_subsystem_names 00:19:10.651 19:49:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:10.651 19:49:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.651 19:49:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:10.651 19:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.651 19:49:52 -- host/discovery.sh@59 -- # sort 00:19:10.651 19:49:52 -- host/discovery.sh@59 -- # xargs 00:19:10.651 19:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.910 19:49:52 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:10.910 19:49:52 -- host/discovery.sh@92 -- # get_bdev_list 00:19:10.910 19:49:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:10.910 19:49:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.910 19:49:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:10.910 19:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.910 19:49:52 -- host/discovery.sh@55 -- # sort 00:19:10.910 19:49:52 -- host/discovery.sh@55 -- # xargs 00:19:10.910 19:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.910 19:49:52 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:10.910 19:49:52 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:10.910 19:49:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.910 19:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.910 [2024-04-24 19:49:52.227419] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.910 19:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.910 19:49:52 -- host/discovery.sh@97 -- # get_subsystem_names 00:19:10.910 19:49:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:10.910 19:49:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.910 19:49:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:10.910 19:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.910 19:49:52 -- host/discovery.sh@59 -- # sort 00:19:10.910 19:49:52 -- host/discovery.sh@59 -- # xargs 00:19:10.910 19:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.910 19:49:52 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:19:10.910 19:49:52 -- host/discovery.sh@98 -- # get_bdev_list 00:19:10.910 19:49:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:10.910 19:49:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:10.910 19:49:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.910 19:49:52 -- host/discovery.sh@55 -- # sort 00:19:10.910 19:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.910 19:49:52 -- host/discovery.sh@55 -- # xargs 00:19:10.910 19:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.910 19:49:52 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:10.910 19:49:52 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:10.910 19:49:52 -- host/discovery.sh@79 -- # expected_count=0 00:19:10.910 19:49:52 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:10.910 19:49:52 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:10.910 19:49:52 -- common/autotest_common.sh@901 -- # local max=10 00:19:10.910 19:49:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:10.910 19:49:52 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:10.910 19:49:52 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:10.910 19:49:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:10.910 19:49:52 -- host/discovery.sh@74 -- # jq '. | length' 00:19:10.910 19:49:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.910 19:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.910 19:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.910 19:49:52 -- host/discovery.sh@74 -- # notification_count=0 00:19:10.910 19:49:52 -- host/discovery.sh@75 -- # notify_id=0 00:19:10.910 19:49:52 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:10.910 19:49:52 -- common/autotest_common.sh@904 -- # return 0 00:19:10.910 19:49:52 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:10.910 19:49:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.910 19:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.910 19:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.910 19:49:52 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:10.910 19:49:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:10.910 19:49:52 -- common/autotest_common.sh@901 -- # local max=10 00:19:10.910 19:49:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:10.910 19:49:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:10.910 19:49:52 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:10.910 19:49:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:10.910 19:49:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.910 19:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.911 19:49:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:10.911 19:49:52 -- host/discovery.sh@59 -- # sort 00:19:10.911 19:49:52 -- host/discovery.sh@59 -- # xargs 00:19:10.911 19:49:52 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:19:10.911 19:49:52 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:19:10.911 19:49:52 -- common/autotest_common.sh@906 -- # sleep 1 00:19:11.477 [2024-04-24 19:49:52.960172] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:11.477 [2024-04-24 19:49:52.960201] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:11.477 [2024-04-24 19:49:52.960228] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:11.736 [2024-04-24 19:49:53.046502] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:11.736 [2024-04-24 19:49:53.149569] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:11.736 [2024-04-24 19:49:53.149596] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:12.006 19:49:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:12.006 19:49:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:12.006 19:49:53 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:12.006 19:49:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:12.006 19:49:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:12.006 19:49:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.006 19:49:53 -- common/autotest_common.sh@10 -- # set +x 00:19:12.006 19:49:53 -- host/discovery.sh@59 -- # sort 00:19:12.006 19:49:53 -- host/discovery.sh@59 -- # xargs 00:19:12.006 19:49:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.006 19:49:53 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.006 19:49:53 -- common/autotest_common.sh@904 -- # return 0 00:19:12.006 19:49:53 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:12.006 19:49:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:12.006 19:49:53 -- common/autotest_common.sh@901 -- # local max=10 00:19:12.006 19:49:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:12.006 19:49:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:12.006 19:49:53 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:12.006 19:49:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.006 19:49:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:12.006 19:49:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.006 19:49:53 -- common/autotest_common.sh@10 -- # set +x 00:19:12.006 19:49:53 -- host/discovery.sh@55 -- # sort 00:19:12.006 19:49:53 -- host/discovery.sh@55 -- # xargs 00:19:12.006 19:49:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.006 19:49:53 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:12.006 19:49:53 -- common/autotest_common.sh@904 -- # return 0 00:19:12.006 19:49:53 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:12.006 19:49:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:12.006 19:49:53 -- common/autotest_common.sh@901 -- # local max=10 00:19:12.006 19:49:53 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:19:12.006 19:49:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:12.006 19:49:53 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:19:12.006 19:49:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:12.006 19:49:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:12.006 19:49:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.006 19:49:53 -- common/autotest_common.sh@10 -- # set +x 00:19:12.006 19:49:53 -- host/discovery.sh@63 -- # sort -n 00:19:12.006 19:49:53 -- host/discovery.sh@63 -- # xargs 00:19:12.006 19:49:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.275 19:49:53 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:19:12.275 19:49:53 -- common/autotest_common.sh@904 -- # return 0 00:19:12.275 19:49:53 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:12.275 19:49:53 -- host/discovery.sh@79 -- # expected_count=1 00:19:12.275 19:49:53 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:12.275 19:49:53 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:12.275 19:49:53 -- common/autotest_common.sh@901 -- # local max=10 00:19:12.275 19:49:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:12.275 19:49:53 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:12.275 19:49:53 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:12.275 19:49:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:12.275 19:49:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.275 19:49:53 -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:12.275 19:49:53 -- common/autotest_common.sh@10 -- # set +x 00:19:12.275 19:49:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.275 19:49:53 -- host/discovery.sh@74 -- # notification_count=1 00:19:12.275 19:49:53 -- host/discovery.sh@75 -- # notify_id=1 00:19:12.275 19:49:53 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:12.275 19:49:53 -- common/autotest_common.sh@904 -- # return 0 00:19:12.275 19:49:53 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:12.275 19:49:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.275 19:49:53 -- common/autotest_common.sh@10 -- # set +x 00:19:12.275 19:49:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.275 19:49:53 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:12.275 19:49:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:12.275 19:49:53 -- common/autotest_common.sh@901 -- # local max=10 00:19:12.275 19:49:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:12.275 19:49:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:12.275 19:49:53 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:12.275 19:49:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.275 19:49:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.275 19:49:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:12.275 19:49:53 -- common/autotest_common.sh@10 -- # set +x 00:19:12.275 19:49:53 -- host/discovery.sh@55 -- # sort 00:19:12.275 19:49:53 -- host/discovery.sh@55 -- # xargs 00:19:12.275 19:49:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.275 19:49:53 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:12.275 19:49:53 -- common/autotest_common.sh@904 -- # return 0 00:19:12.275 19:49:53 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:12.275 19:49:53 -- host/discovery.sh@79 -- # expected_count=1 00:19:12.275 19:49:53 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:12.275 19:49:53 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:12.275 19:49:53 -- common/autotest_common.sh@901 -- # local max=10 00:19:12.275 19:49:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:12.275 19:49:53 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:12.275 19:49:53 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:12.275 19:49:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:12.275 19:49:53 -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:12.275 19:49:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.275 19:49:53 -- common/autotest_common.sh@10 -- # set +x 00:19:12.275 19:49:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.533 19:49:53 -- host/discovery.sh@74 -- # notification_count=1 00:19:12.533 19:49:53 -- host/discovery.sh@75 -- # notify_id=2 00:19:12.533 19:49:53 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:12.533 19:49:53 -- common/autotest_common.sh@904 -- # return 0 00:19:12.533 19:49:53 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:12.533 19:49:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.533 19:49:53 -- common/autotest_common.sh@10 -- # set +x 00:19:12.533 [2024-04-24 19:49:53.828131] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:12.533 [2024-04-24 19:49:53.829130] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:12.533 [2024-04-24 19:49:53.829165] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:12.533 19:49:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.533 19:49:53 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:12.533 19:49:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:12.533 19:49:53 -- common/autotest_common.sh@901 -- # local max=10 00:19:12.533 19:49:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:12.533 19:49:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:12.533 19:49:53 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:12.533 19:49:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:12.533 19:49:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.533 19:49:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:12.533 19:49:53 -- common/autotest_common.sh@10 -- # set +x 00:19:12.533 19:49:53 -- host/discovery.sh@59 -- # sort 00:19:12.533 19:49:53 -- host/discovery.sh@59 -- # xargs 00:19:12.533 19:49:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.533 19:49:53 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.533 19:49:53 -- common/autotest_common.sh@904 -- # return 0 00:19:12.533 19:49:53 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:12.533 19:49:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:12.533 19:49:53 -- common/autotest_common.sh@901 -- # local max=10 00:19:12.533 19:49:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:12.533 19:49:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:12.533 19:49:53 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:12.533 19:49:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:12.533 19:49:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:12.533 19:49:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.533 19:49:53 -- common/autotest_common.sh@10 -- # set +x 00:19:12.533 19:49:53 -- host/discovery.sh@55 -- # sort 00:19:12.533 19:49:53 -- host/discovery.sh@55 -- # xargs 00:19:12.533 19:49:53 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:19:12.533 [2024-04-24 19:49:53.914843] bdev_nvme.c:6843:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:19:12.533 19:49:53 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:12.533 19:49:53 -- common/autotest_common.sh@904 -- # return 0 00:19:12.533 19:49:53 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:12.533 19:49:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:12.533 19:49:53 -- common/autotest_common.sh@901 -- # local max=10 00:19:12.533 19:49:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:12.533 19:49:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:12.533 19:49:53 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:19:12.533 19:49:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:12.533 19:49:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:12.533 19:49:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.533 19:49:53 -- common/autotest_common.sh@10 -- # set +x 00:19:12.533 19:49:53 -- host/discovery.sh@63 -- # sort -n 00:19:12.533 19:49:53 -- host/discovery.sh@63 -- # xargs 00:19:12.533 19:49:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.533 19:49:53 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:19:12.533 19:49:53 -- common/autotest_common.sh@906 -- # sleep 1 00:19:12.791 [2024-04-24 19:49:54.175132] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:12.791 [2024-04-24 19:49:54.175159] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:12.791 [2024-04-24 19:49:54.175170] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:13.729 19:49:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:13.729 19:49:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:13.729 19:49:54 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:19:13.729 19:49:54 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:13.729 19:49:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.729 19:49:54 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:13.729 19:49:54 -- common/autotest_common.sh@10 -- # set +x 00:19:13.729 19:49:54 -- host/discovery.sh@63 -- # sort -n 00:19:13.729 19:49:54 -- host/discovery.sh@63 -- # xargs 00:19:13.729 19:49:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.729 19:49:55 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:13.729 19:49:55 -- common/autotest_common.sh@904 -- # return 0 00:19:13.729 19:49:55 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:13.729 19:49:55 -- host/discovery.sh@79 -- # expected_count=0 00:19:13.729 19:49:55 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:13.729 19:49:55 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:13.729 19:49:55 -- common/autotest_common.sh@901 -- # local max=10 00:19:13.729 19:49:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:13.729 19:49:55 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:13.729 19:49:55 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:13.729 19:49:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:13.729 19:49:55 -- host/discovery.sh@74 -- # jq '. | length' 00:19:13.729 19:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.729 19:49:55 -- common/autotest_common.sh@10 -- # set +x 00:19:13.729 19:49:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.729 19:49:55 -- host/discovery.sh@74 -- # notification_count=0 00:19:13.729 19:49:55 -- host/discovery.sh@75 -- # notify_id=2 00:19:13.729 19:49:55 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:13.729 19:49:55 -- common/autotest_common.sh@904 -- # return 0 00:19:13.729 19:49:55 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:13.729 19:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.729 19:49:55 -- common/autotest_common.sh@10 -- # set +x 00:19:13.729 [2024-04-24 19:49:55.057027] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:13.729 [2024-04-24 19:49:55.057061] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:13.729 [2024-04-24 19:49:55.058225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.729 [2024-04-24 19:49:55.058260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.729 [2024-04-24 19:49:55.058279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.729 [2024-04-24 19:49:55.058294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.729 [2024-04-24 19:49:55.058309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.729 [2024-04-24 19:49:55.058326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.729 [2024-04-24 19:49:55.058341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:13.729 [2024-04-24 19:49:55.058355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:13.729 [2024-04-24 19:49:55.058370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232aa30 is same with the state(5) to be set 00:19:13.729 19:49:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.729 19:49:55 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:13.729 19:49:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:19:13.729 19:49:55 -- common/autotest_common.sh@901 -- # local max=10 00:19:13.729 19:49:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:13.729 19:49:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:13.729 19:49:55 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:13.729 19:49:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:13.729 19:49:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:13.729 19:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.729 19:49:55 -- common/autotest_common.sh@10 -- # set +x 00:19:13.729 19:49:55 -- host/discovery.sh@59 -- # sort 00:19:13.729 19:49:55 -- host/discovery.sh@59 -- # xargs 00:19:13.729 [2024-04-24 19:49:55.068227] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232aa30 (9): Bad file descriptor 00:19:13.729 19:49:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.729 [2024-04-24 19:49:55.078275] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:13.729 [2024-04-24 19:49:55.078540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.729 [2024-04-24 19:49:55.078786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.729 [2024-04-24 19:49:55.078814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232aa30 with addr=10.0.0.2, port=4420 00:19:13.729 [2024-04-24 19:49:55.078831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232aa30 is same with the state(5) to be set 00:19:13.729 [2024-04-24 19:49:55.078854] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232aa30 (9): Bad file descriptor 00:19:13.729 [2024-04-24 19:49:55.078890] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:13.729 [2024-04-24 19:49:55.078925] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:13.729 [2024-04-24 19:49:55.078951] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:13.729 [2024-04-24 19:49:55.078982] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:13.729 [2024-04-24 19:49:55.088358] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:13.729 [2024-04-24 19:49:55.088619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.729 [2024-04-24 19:49:55.088819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.729 [2024-04-24 19:49:55.088848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232aa30 with addr=10.0.0.2, port=4420 00:19:13.729 [2024-04-24 19:49:55.088865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232aa30 is same with the state(5) to be set 00:19:13.729 [2024-04-24 19:49:55.088887] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232aa30 (9): Bad file descriptor 00:19:13.729 [2024-04-24 19:49:55.088924] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:13.729 [2024-04-24 19:49:55.088948] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:13.729 [2024-04-24 19:49:55.088962] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:13.729 [2024-04-24 19:49:55.088997] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:13.729 [2024-04-24 19:49:55.098436] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:13.729 19:49:55 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.729 [2024-04-24 19:49:55.098667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.729 19:49:55 -- common/autotest_common.sh@904 -- # return 0 00:19:13.729 [2024-04-24 19:49:55.098883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.729 [2024-04-24 19:49:55.098909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232aa30 with addr=10.0.0.2, port=4420 00:19:13.730 [2024-04-24 19:49:55.098938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232aa30 is same with the state(5) to be set 00:19:13.730 [2024-04-24 19:49:55.098960] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232aa30 (9): Bad file descriptor 00:19:13.730 19:49:55 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:13.730 [2024-04-24 19:49:55.098995] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:13.730 [2024-04-24 19:49:55.099014] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:13.730 [2024-04-24 19:49:55.099043] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:13.730 [2024-04-24 19:49:55.099066] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
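(Editor's note: the repeated connect() failures with errno = 111, ECONNREFUSED, are the point of this phase rather than a test failure. host/discovery.sh@127 above removed the 4420 listener while nvme0 still had an active path to it, so the host's bdev_nvme reset logic keeps retrying the now-closed port until the next discovery log page prunes the stale path. Roughly, the listener swap that provokes it, reconstructed from the trace; rpc_cmd without -s goes to the target application's default RPC socket:)

    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421     # second path comes up first (@118)
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420     # then the original path is dropped (@127)
    # the host is expected to converge on the surviving path only:
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'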
00:19:13.730 19:49:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:13.730 19:49:55 -- common/autotest_common.sh@901 -- # local max=10 00:19:13.730 19:49:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:13.730 19:49:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:13.730 19:49:55 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:13.730 19:49:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:13.730 19:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.730 19:49:55 -- common/autotest_common.sh@10 -- # set +x 00:19:13.730 19:49:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:13.730 19:49:55 -- host/discovery.sh@55 -- # sort 00:19:13.730 19:49:55 -- host/discovery.sh@55 -- # xargs 00:19:13.730 [2024-04-24 19:49:55.108512] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:13.730 [2024-04-24 19:49:55.108736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.730 [2024-04-24 19:49:55.108934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.730 [2024-04-24 19:49:55.108960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232aa30 with addr=10.0.0.2, port=4420 00:19:13.730 [2024-04-24 19:49:55.108992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232aa30 is same with the state(5) to be set 00:19:13.730 [2024-04-24 19:49:55.109016] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232aa30 (9): Bad file descriptor 00:19:13.730 [2024-04-24 19:49:55.109050] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:13.730 [2024-04-24 19:49:55.109068] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:13.730 [2024-04-24 19:49:55.109082] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:13.730 [2024-04-24 19:49:55.109101] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
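(Editor's note: the `-- #` trace lines interleaved with the reconnect noise come from the small polling helpers that drive every check in this test. A sketch of them, reconstructed from the autotest_common.sh and host/discovery.sh xtrace above; rpc_cmd with -s /tmp/host.sock queries the host application:)

    waitforcondition() {
        local cond=$1            # a quoted shell expression, re-evaluated each pass
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1                 # condition never became true within ~10s
    }

    get_subsystem_names() {      # e.g. "nvme0"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {            # e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {      # ports of every path of one controller, e.g. "4420 4421"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

Typical use, as seen throughout the trace: waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'. The sort | xargs pair normalizes the RPC output to one space-separated line so it can be string-compared.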
00:19:13.730 [2024-04-24 19:49:55.118586] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:13.730 [2024-04-24 19:49:55.118823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.730 [2024-04-24 19:49:55.118990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.730 [2024-04-24 19:49:55.119027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232aa30 with addr=10.0.0.2, port=4420 00:19:13.730 [2024-04-24 19:49:55.119043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232aa30 is same with the state(5) to be set 00:19:13.730 [2024-04-24 19:49:55.119066] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232aa30 (9): Bad file descriptor 00:19:13.730 [2024-04-24 19:49:55.119125] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:13.730 [2024-04-24 19:49:55.119144] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:13.730 [2024-04-24 19:49:55.119157] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:13.730 [2024-04-24 19:49:55.119176] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:13.730 19:49:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.730 [2024-04-24 19:49:55.128680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:13.730 [2024-04-24 19:49:55.128889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.730 [2024-04-24 19:49:55.129140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.730 [2024-04-24 19:49:55.129166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232aa30 with addr=10.0.0.2, port=4420 00:19:13.730 [2024-04-24 19:49:55.129183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232aa30 is same with the state(5) to be set 00:19:13.730 [2024-04-24 19:49:55.129205] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232aa30 (9): Bad file descriptor 00:19:13.730 [2024-04-24 19:49:55.129238] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:13.730 [2024-04-24 19:49:55.129256] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:13.730 [2024-04-24 19:49:55.129270] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:13.730 [2024-04-24 19:49:55.129305] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
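(Editor's note: the notify_id bookkeeping visible in the trace, advancing 0 -> 1 -> 2 -> 4, works roughly as below. This is a sketch assuming notify_get_notifications -i N returns only events newer than id N; the exact RPC semantics are not shown in this log.)

    notify_id=0                  # last notification id already consumed

    get_notification_count() {
        # count only events newer than the last consumed id, then advance it
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }

Each namespace attach above produced one new bdev notification, which is why the trace expects counts of 1 and then 1 while notify_id marches to 2; the final is_notification_count_eq 2 after stop_discovery accounts for both bdevs going away at once.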
00:19:13.730 [2024-04-24 19:49:55.138756] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:13.730 [2024-04-24 19:49:55.139005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.730 [2024-04-24 19:49:55.139220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:13.730 [2024-04-24 19:49:55.139249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x232aa30 with addr=10.0.0.2, port=4420 00:19:13.730 [2024-04-24 19:49:55.139272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232aa30 is same with the state(5) to be set 00:19:13.730 [2024-04-24 19:49:55.139297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232aa30 (9): Bad file descriptor 00:19:13.730 [2024-04-24 19:49:55.139347] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:13.730 [2024-04-24 19:49:55.139369] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:13.730 [2024-04-24 19:49:55.139385] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:13.730 19:49:55 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:13.730 [2024-04-24 19:49:55.139406] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:13.730 19:49:55 -- common/autotest_common.sh@904 -- # return 0 00:19:13.730 19:49:55 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:13.730 19:49:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:13.730 19:49:55 -- common/autotest_common.sh@901 -- # local max=10 00:19:13.730 19:49:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:13.730 19:49:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:13.730 19:49:55 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:19:13.730 19:49:55 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:13.730 19:49:55 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:13.730 19:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.730 19:49:55 -- common/autotest_common.sh@10 -- # set +x 00:19:13.730 19:49:55 -- host/discovery.sh@63 -- # sort -n 00:19:13.730 19:49:55 -- host/discovery.sh@63 -- # xargs 00:19:13.730 [2024-04-24 19:49:55.143351] bdev_nvme.c:6706:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:13.730 [2024-04-24 19:49:55.143386] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:13.730 19:49:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.730 19:49:55 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:19:13.730 19:49:55 -- common/autotest_common.sh@904 -- # return 0 00:19:13.730 19:49:55 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:13.730 19:49:55 -- host/discovery.sh@79 -- # expected_count=0 00:19:13.730 19:49:55 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && 
((notification_count == expected_count))' 00:19:13.730 19:49:55 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:13.730 19:49:55 -- common/autotest_common.sh@901 -- # local max=10 00:19:13.730 19:49:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:13.730 19:49:55 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:13.730 19:49:55 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:13.730 19:49:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:13.730 19:49:55 -- host/discovery.sh@74 -- # jq '. | length' 00:19:13.730 19:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.730 19:49:55 -- common/autotest_common.sh@10 -- # set +x 00:19:13.730 19:49:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.730 19:49:55 -- host/discovery.sh@74 -- # notification_count=0 00:19:13.730 19:49:55 -- host/discovery.sh@75 -- # notify_id=2 00:19:13.730 19:49:55 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:13.730 19:49:55 -- common/autotest_common.sh@904 -- # return 0 00:19:13.730 19:49:55 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:13.730 19:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.730 19:49:55 -- common/autotest_common.sh@10 -- # set +x 00:19:13.990 19:49:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.990 19:49:55 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:13.990 19:49:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:13.990 19:49:55 -- common/autotest_common.sh@901 -- # local max=10 00:19:13.990 19:49:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:13.990 19:49:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:13.990 19:49:55 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:13.990 19:49:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:13.990 19:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.990 19:49:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:13.990 19:49:55 -- common/autotest_common.sh@10 -- # set +x 00:19:13.990 19:49:55 -- host/discovery.sh@59 -- # sort 00:19:13.990 19:49:55 -- host/discovery.sh@59 -- # xargs 00:19:13.990 19:49:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.990 19:49:55 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:19:13.990 19:49:55 -- common/autotest_common.sh@904 -- # return 0 00:19:13.990 19:49:55 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:13.990 19:49:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:13.990 19:49:55 -- common/autotest_common.sh@901 -- # local max=10 00:19:13.990 19:49:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:13.990 19:49:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:13.990 19:49:55 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:13.990 19:49:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:13.990 19:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.990 19:49:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:13.990 19:49:55 -- 
common/autotest_common.sh@10 -- # set +x 00:19:13.990 19:49:55 -- host/discovery.sh@55 -- # sort 00:19:13.990 19:49:55 -- host/discovery.sh@55 -- # xargs 00:19:13.990 19:49:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.990 19:49:55 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:19:13.990 19:49:55 -- common/autotest_common.sh@904 -- # return 0 00:19:13.990 19:49:55 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:13.990 19:49:55 -- host/discovery.sh@79 -- # expected_count=2 00:19:13.990 19:49:55 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:13.990 19:49:55 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:13.990 19:49:55 -- common/autotest_common.sh@901 -- # local max=10 00:19:13.990 19:49:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:13.990 19:49:55 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:13.990 19:49:55 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:13.990 19:49:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:13.990 19:49:55 -- host/discovery.sh@74 -- # jq '. | length' 00:19:13.990 19:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.990 19:49:55 -- common/autotest_common.sh@10 -- # set +x 00:19:13.990 19:49:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.990 19:49:55 -- host/discovery.sh@74 -- # notification_count=2 00:19:13.990 19:49:55 -- host/discovery.sh@75 -- # notify_id=4 00:19:13.990 19:49:55 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:13.990 19:49:55 -- common/autotest_common.sh@904 -- # return 0 00:19:13.990 19:49:55 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:13.990 19:49:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.990 19:49:55 -- common/autotest_common.sh@10 -- # set +x 00:19:14.928 [2024-04-24 19:49:56.389060] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:14.928 [2024-04-24 19:49:56.389104] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:14.928 [2024-04-24 19:49:56.389132] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:15.188 [2024-04-24 19:49:56.476389] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:19:15.447 [2024-04-24 19:49:56.744520] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:15.447 [2024-04-24 19:49:56.744572] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:15.447 19:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.447 19:49:56 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:15.447 19:49:56 -- common/autotest_common.sh@638 -- # local es=0 00:19:15.447 19:49:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery 
-b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:15.447 19:49:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:15.447 19:49:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:15.447 19:49:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:15.447 19:49:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:15.447 19:49:56 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:15.447 19:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.447 19:49:56 -- common/autotest_common.sh@10 -- # set +x 00:19:15.447 request: 00:19:15.447 { 00:19:15.447 "name": "nvme", 00:19:15.447 "trtype": "tcp", 00:19:15.447 "traddr": "10.0.0.2", 00:19:15.447 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:15.447 "adrfam": "ipv4", 00:19:15.447 "trsvcid": "8009", 00:19:15.447 "wait_for_attach": true, 00:19:15.447 "method": "bdev_nvme_start_discovery", 00:19:15.447 "req_id": 1 00:19:15.447 } 00:19:15.447 Got JSON-RPC error response 00:19:15.447 response: 00:19:15.447 { 00:19:15.447 "code": -17, 00:19:15.447 "message": "File exists" 00:19:15.447 } 00:19:15.447 19:49:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:15.447 19:49:56 -- common/autotest_common.sh@641 -- # es=1 00:19:15.447 19:49:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:15.447 19:49:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:15.447 19:49:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:15.447 19:49:56 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:15.447 19:49:56 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:15.447 19:49:56 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:15.447 19:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.448 19:49:56 -- common/autotest_common.sh@10 -- # set +x 00:19:15.448 19:49:56 -- host/discovery.sh@67 -- # sort 00:19:15.448 19:49:56 -- host/discovery.sh@67 -- # xargs 00:19:15.448 19:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.448 19:49:56 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:15.448 19:49:56 -- host/discovery.sh@146 -- # get_bdev_list 00:19:15.448 19:49:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:15.448 19:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.448 19:49:56 -- common/autotest_common.sh@10 -- # set +x 00:19:15.448 19:49:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:15.448 19:49:56 -- host/discovery.sh@55 -- # sort 00:19:15.448 19:49:56 -- host/discovery.sh@55 -- # xargs 00:19:15.448 19:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.448 19:49:56 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:15.448 19:49:56 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:15.448 19:49:56 -- common/autotest_common.sh@638 -- # local es=0 00:19:15.448 19:49:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:15.448 19:49:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:15.448 19:49:56 -- common/autotest_common.sh@630 -- # case "$(type 
-t "$arg")" in 00:19:15.448 19:49:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:15.448 19:49:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:15.448 19:49:56 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:15.448 19:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.448 19:49:56 -- common/autotest_common.sh@10 -- # set +x 00:19:15.448 request: 00:19:15.448 { 00:19:15.448 "name": "nvme_second", 00:19:15.448 "trtype": "tcp", 00:19:15.448 "traddr": "10.0.0.2", 00:19:15.448 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:15.448 "adrfam": "ipv4", 00:19:15.448 "trsvcid": "8009", 00:19:15.448 "wait_for_attach": true, 00:19:15.448 "method": "bdev_nvme_start_discovery", 00:19:15.448 "req_id": 1 00:19:15.448 } 00:19:15.448 Got JSON-RPC error response 00:19:15.448 response: 00:19:15.448 { 00:19:15.448 "code": -17, 00:19:15.448 "message": "File exists" 00:19:15.448 } 00:19:15.448 19:49:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:15.448 19:49:56 -- common/autotest_common.sh@641 -- # es=1 00:19:15.448 19:49:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:15.448 19:49:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:15.448 19:49:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:15.448 19:49:56 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:15.448 19:49:56 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:15.448 19:49:56 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:15.448 19:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.448 19:49:56 -- common/autotest_common.sh@10 -- # set +x 00:19:15.448 19:49:56 -- host/discovery.sh@67 -- # sort 00:19:15.448 19:49:56 -- host/discovery.sh@67 -- # xargs 00:19:15.448 19:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.448 19:49:56 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:15.448 19:49:56 -- host/discovery.sh@152 -- # get_bdev_list 00:19:15.448 19:49:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:15.448 19:49:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:15.448 19:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.448 19:49:56 -- common/autotest_common.sh@10 -- # set +x 00:19:15.448 19:49:56 -- host/discovery.sh@55 -- # sort 00:19:15.448 19:49:56 -- host/discovery.sh@55 -- # xargs 00:19:15.448 19:49:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.448 19:49:56 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:15.448 19:49:56 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:15.448 19:49:56 -- common/autotest_common.sh@638 -- # local es=0 00:19:15.448 19:49:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:15.448 19:49:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:15.448 19:49:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:15.448 19:49:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:15.448 19:49:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:15.448 19:49:56 
-- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:15.448 19:49:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.448 19:49:56 -- common/autotest_common.sh@10 -- # set +x 00:19:16.839 [2024-04-24 19:49:57.948537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:16.839 [2024-04-24 19:49:57.948757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:16.839 [2024-04-24 19:49:57.948786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2503280 with addr=10.0.0.2, port=8010 00:19:16.839 [2024-04-24 19:49:57.948818] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:16.839 [2024-04-24 19:49:57.948835] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:16.839 [2024-04-24 19:49:57.948848] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:17.778 [2024-04-24 19:49:58.950899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.778 [2024-04-24 19:49:58.951143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:17.778 [2024-04-24 19:49:58.951173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2503280 with addr=10.0.0.2, port=8010 00:19:17.778 [2024-04-24 19:49:58.951195] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:17.778 [2024-04-24 19:49:58.951211] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:17.778 [2024-04-24 19:49:58.951225] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:18.718 [2024-04-24 19:49:59.953073] bdev_nvme.c:6962:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:19:18.718 request: 00:19:18.718 { 00:19:18.718 "name": "nvme_second", 00:19:18.718 "trtype": "tcp", 00:19:18.718 "traddr": "10.0.0.2", 00:19:18.718 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:18.718 "adrfam": "ipv4", 00:19:18.718 "trsvcid": "8010", 00:19:18.718 "attach_timeout_ms": 3000, 00:19:18.718 "method": "bdev_nvme_start_discovery", 00:19:18.718 "req_id": 1 00:19:18.718 } 00:19:18.718 Got JSON-RPC error response 00:19:18.718 response: 00:19:18.718 { 00:19:18.718 "code": -110, 00:19:18.718 "message": "Connection timed out" 00:19:18.718 } 00:19:18.718 19:49:59 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:18.718 19:49:59 -- common/autotest_common.sh@641 -- # es=1 00:19:18.718 19:49:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:18.718 19:49:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:18.718 19:49:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:18.718 19:49:59 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:18.718 19:49:59 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:18.718 19:49:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.718 19:49:59 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:18.718 19:49:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.718 19:49:59 -- host/discovery.sh@67 -- # sort 00:19:18.718 19:49:59 -- host/discovery.sh@67 -- # xargs 00:19:18.718 19:49:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.718 19:50:00 -- host/discovery.sh@157 -- # [[ nvme 
== \n\v\m\e ]] 00:19:18.718 19:50:00 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:18.718 19:50:00 -- host/discovery.sh@161 -- # kill 1749054 00:19:18.718 19:50:00 -- host/discovery.sh@162 -- # nvmftestfini 00:19:18.718 19:50:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:18.718 19:50:00 -- nvmf/common.sh@117 -- # sync 00:19:18.718 19:50:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:18.718 19:50:00 -- nvmf/common.sh@120 -- # set +e 00:19:18.718 19:50:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:18.718 19:50:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:18.718 rmmod nvme_tcp 00:19:18.718 rmmod nvme_fabrics 00:19:18.718 rmmod nvme_keyring 00:19:18.718 19:50:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:18.718 19:50:00 -- nvmf/common.sh@124 -- # set -e 00:19:18.718 19:50:00 -- nvmf/common.sh@125 -- # return 0 00:19:18.718 19:50:00 -- nvmf/common.sh@478 -- # '[' -n 1748896 ']' 00:19:18.718 19:50:00 -- nvmf/common.sh@479 -- # killprocess 1748896 00:19:18.718 19:50:00 -- common/autotest_common.sh@936 -- # '[' -z 1748896 ']' 00:19:18.718 19:50:00 -- common/autotest_common.sh@940 -- # kill -0 1748896 00:19:18.718 19:50:00 -- common/autotest_common.sh@941 -- # uname 00:19:18.718 19:50:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:18.718 19:50:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1748896 00:19:18.718 19:50:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:18.718 19:50:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:18.718 19:50:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1748896' 00:19:18.718 killing process with pid 1748896 00:19:18.718 19:50:00 -- common/autotest_common.sh@955 -- # kill 1748896 00:19:18.718 19:50:00 -- common/autotest_common.sh@960 -- # wait 1748896 00:19:18.977 19:50:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:18.977 19:50:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:18.977 19:50:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:18.977 19:50:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:18.977 19:50:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:18.977 19:50:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.977 19:50:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.977 19:50:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.518 19:50:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:21.518 00:19:21.518 real 0m14.775s 00:19:21.518 user 0m21.855s 00:19:21.518 sys 0m2.953s 00:19:21.518 19:50:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:21.518 19:50:02 -- common/autotest_common.sh@10 -- # set +x 00:19:21.518 ************************************ 00:19:21.518 END TEST nvmf_discovery 00:19:21.518 ************************************ 00:19:21.518 19:50:02 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:21.518 19:50:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:21.518 19:50:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:21.518 19:50:02 -- common/autotest_common.sh@10 -- # set +x 00:19:21.518 ************************************ 00:19:21.518 START TEST nvmf_discovery_remove_ifc 00:19:21.518 ************************************ 00:19:21.518 19:50:02 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:21.518 * Looking for test storage... 00:19:21.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:21.518 19:50:02 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:21.518 19:50:02 -- nvmf/common.sh@7 -- # uname -s 00:19:21.518 19:50:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.518 19:50:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.518 19:50:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.518 19:50:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.518 19:50:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.518 19:50:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.518 19:50:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.518 19:50:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.518 19:50:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.518 19:50:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.518 19:50:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.518 19:50:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.518 19:50:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.518 19:50:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.518 19:50:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.518 19:50:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.518 19:50:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:21.518 19:50:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.518 19:50:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.518 19:50:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.518 19:50:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.518 19:50:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.518 19:50:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.518 19:50:02 -- paths/export.sh@5 -- # export PATH 00:19:21.519 19:50:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.519 19:50:02 -- nvmf/common.sh@47 -- # : 0 00:19:21.519 19:50:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:21.519 19:50:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:21.519 19:50:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.519 19:50:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.519 19:50:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.519 19:50:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:21.519 19:50:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:21.519 19:50:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:21.519 19:50:02 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:21.519 19:50:02 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:21.519 19:50:02 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:21.519 19:50:02 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:21.519 19:50:02 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:21.519 19:50:02 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:21.519 19:50:02 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:21.519 19:50:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:21.519 19:50:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.519 19:50:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:21.519 19:50:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:21.519 19:50:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:21.519 19:50:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.519 19:50:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.519 19:50:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.519 19:50:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:21.519 19:50:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:21.519 19:50:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:21.519 19:50:02 -- common/autotest_common.sh@10 -- # set +x 00:19:23.426 19:50:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:23.426 19:50:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:23.426 19:50:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:23.426 19:50:04 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:23.426 19:50:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:23.426 19:50:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:23.426 19:50:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:23.426 19:50:04 -- nvmf/common.sh@295 -- # net_devs=() 00:19:23.426 19:50:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:23.426 19:50:04 -- nvmf/common.sh@296 -- # e810=() 00:19:23.426 19:50:04 -- nvmf/common.sh@296 -- # local -ga e810 00:19:23.426 19:50:04 -- nvmf/common.sh@297 -- # x722=() 00:19:23.426 19:50:04 -- nvmf/common.sh@297 -- # local -ga x722 00:19:23.426 19:50:04 -- nvmf/common.sh@298 -- # mlx=() 00:19:23.427 19:50:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:23.427 19:50:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.427 19:50:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.427 19:50:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.427 19:50:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.427 19:50:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.427 19:50:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.427 19:50:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.427 19:50:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.427 19:50:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.427 19:50:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.427 19:50:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.427 19:50:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:23.427 19:50:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:23.427 19:50:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:23.427 19:50:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.427 19:50:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:23.427 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:23.427 19:50:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.427 19:50:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:23.427 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:23.427 19:50:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:23.427 19:50:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:23.427 19:50:04 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.427 19:50:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.427 19:50:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:23.427 19:50:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.427 19:50:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:23.427 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:23.427 19:50:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.427 19:50:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.427 19:50:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.427 19:50:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:23.427 19:50:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.427 19:50:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:23.427 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:23.427 19:50:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.427 19:50:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:23.427 19:50:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:23.427 19:50:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:23.427 19:50:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.427 19:50:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.427 19:50:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.427 19:50:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:23.427 19:50:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.427 19:50:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.427 19:50:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:23.427 19:50:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.427 19:50:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.427 19:50:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:23.427 19:50:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:23.427 19:50:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.427 19:50:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.427 19:50:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.427 19:50:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.427 19:50:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:23.427 19:50:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.427 19:50:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.427 19:50:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.427 19:50:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:23.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:23.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:19:23.427 00:19:23.427 --- 10.0.0.2 ping statistics --- 00:19:23.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.427 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:19:23.427 19:50:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:23.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:19:23.427 00:19:23.427 --- 10.0.0.1 ping statistics --- 00:19:23.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.427 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:19:23.427 19:50:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.427 19:50:04 -- nvmf/common.sh@411 -- # return 0 00:19:23.427 19:50:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:23.427 19:50:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.427 19:50:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:23.427 19:50:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.427 19:50:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:23.427 19:50:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:23.427 19:50:04 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:23.427 19:50:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:23.427 19:50:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:23.427 19:50:04 -- common/autotest_common.sh@10 -- # set +x 00:19:23.427 19:50:04 -- nvmf/common.sh@470 -- # nvmfpid=1752223 00:19:23.427 19:50:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:23.427 19:50:04 -- nvmf/common.sh@471 -- # waitforlisten 1752223 00:19:23.427 19:50:04 -- common/autotest_common.sh@817 -- # '[' -z 1752223 ']' 00:19:23.427 19:50:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.427 19:50:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:23.427 19:50:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.427 19:50:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:23.427 19:50:04 -- common/autotest_common.sh@10 -- # set +x 00:19:23.427 [2024-04-24 19:50:04.719235] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:19:23.427 [2024-04-24 19:50:04.719298] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.427 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.427 [2024-04-24 19:50:04.786327] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.427 [2024-04-24 19:50:04.904307] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.427 [2024-04-24 19:50:04.904372] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:23.427 [2024-04-24 19:50:04.904395] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.427 [2024-04-24 19:50:04.904409] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.427 [2024-04-24 19:50:04.904420] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.427 [2024-04-24 19:50:04.904453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.394 19:50:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:24.394 19:50:05 -- common/autotest_common.sh@850 -- # return 0 00:19:24.394 19:50:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:24.394 19:50:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:24.394 19:50:05 -- common/autotest_common.sh@10 -- # set +x 00:19:24.394 19:50:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.394 19:50:05 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:24.394 19:50:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.394 19:50:05 -- common/autotest_common.sh@10 -- # set +x 00:19:24.394 [2024-04-24 19:50:05.689292] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.394 [2024-04-24 19:50:05.697442] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:24.394 null0 00:19:24.394 [2024-04-24 19:50:05.729426] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.394 19:50:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.394 19:50:05 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1752371 00:19:24.394 19:50:05 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:24.394 19:50:05 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1752371 /tmp/host.sock 00:19:24.394 19:50:05 -- common/autotest_common.sh@817 -- # '[' -z 1752371 ']' 00:19:24.394 19:50:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:19:24.394 19:50:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:24.394 19:50:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:24.394 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:24.394 19:50:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:24.394 19:50:05 -- common/autotest_common.sh@10 -- # set +x 00:19:24.394 [2024-04-24 19:50:05.796307] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
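The startup traced above fixes the topology this test runs on: one nvmf_tgt acts as the target inside the cvl_0_0_ns_spdk network namespace (reactor on core 1, RPC on the default /var/tmp/spdk.sock), while a second SPDK app on core 0 plays the host and is driven over a private RPC socket at /tmp/host.sock; the two ping checks confirm the path in both directions before anything else runs. Condensed from the commands visible in the trace, with binary paths shortened, a sketch of the sequence rather than the exact script:

    # move one port of the e810 pair into a namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # target inside the namespace, host-side app outside, one core each
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

Every rpc_cmd below that carries -s /tmp/host.sock is talking to the second process; rpc_cmd without -s goes to the target.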
00:19:24.394 [2024-04-24 19:50:05.796383] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752371 ] 00:19:24.394 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.394 [2024-04-24 19:50:05.866034] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.653 [2024-04-24 19:50:05.993445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.653 19:50:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:24.653 19:50:06 -- common/autotest_common.sh@850 -- # return 0 00:19:24.653 19:50:06 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:24.653 19:50:06 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:24.653 19:50:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.653 19:50:06 -- common/autotest_common.sh@10 -- # set +x 00:19:24.653 19:50:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.653 19:50:06 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:24.653 19:50:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.653 19:50:06 -- common/autotest_common.sh@10 -- # set +x 00:19:24.653 19:50:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.653 19:50:06 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:24.653 19:50:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.653 19:50:06 -- common/autotest_common.sh@10 -- # set +x 00:19:26.029 [2024-04-24 19:50:07.147892] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:26.029 [2024-04-24 19:50:07.147918] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:26.029 [2024-04-24 19:50:07.147967] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:26.029 [2024-04-24 19:50:07.234270] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:26.029 [2024-04-24 19:50:07.458478] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:26.029 [2024-04-24 19:50:07.458545] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:26.029 [2024-04-24 19:50:07.458588] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:26.029 [2024-04-24 19:50:07.458615] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:26.029 [2024-04-24 19:50:07.458651] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:26.029 19:50:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.029 19:50:07 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:26.029 19:50:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:26.029 19:50:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:26.029 19:50:07 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:26.030 19:50:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.030 19:50:07 -- common/autotest_common.sh@10 -- # set +x 00:19:26.030 19:50:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:26.030 19:50:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:26.030 [2024-04-24 19:50:07.466078] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd022d0 was disconnected and freed. delete nvme_qpair. 00:19:26.030 19:50:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.030 19:50:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:26.030 19:50:07 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:19:26.030 19:50:07 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:19:26.288 19:50:07 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:26.289 19:50:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:26.289 19:50:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:26.289 19:50:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.289 19:50:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:26.289 19:50:07 -- common/autotest_common.sh@10 -- # set +x 00:19:26.289 19:50:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:26.289 19:50:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:26.289 19:50:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.289 19:50:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:26.289 19:50:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:27.225 19:50:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:27.225 19:50:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:27.225 19:50:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:27.225 19:50:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.225 19:50:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:27.225 19:50:08 -- common/autotest_common.sh@10 -- # set +x 00:19:27.225 19:50:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:27.225 19:50:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.225 19:50:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:27.225 19:50:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:28.161 19:50:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:28.161 19:50:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:28.161 19:50:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:28.161 19:50:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.161 19:50:09 -- common/autotest_common.sh@10 -- # set +x 00:19:28.161 19:50:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:28.161 19:50:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:28.161 19:50:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.420 19:50:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:28.421 19:50:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:29.357 19:50:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:29.357 19:50:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:29.357 19:50:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:29.357 19:50:10 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.357 19:50:10 -- common/autotest_common.sh@10 -- # set +x 00:19:29.357 19:50:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:29.357 19:50:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:29.357 19:50:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.357 19:50:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:29.357 19:50:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:30.298 19:50:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:30.298 19:50:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:30.298 19:50:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.298 19:50:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:30.298 19:50:11 -- common/autotest_common.sh@10 -- # set +x 00:19:30.298 19:50:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:30.298 19:50:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:30.298 19:50:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.298 19:50:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:30.298 19:50:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:31.267 19:50:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:31.267 19:50:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:31.267 19:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.267 19:50:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:31.267 19:50:12 -- common/autotest_common.sh@10 -- # set +x 00:19:31.267 19:50:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:31.267 19:50:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:31.528 19:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.528 19:50:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:31.528 19:50:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:31.528 [2024-04-24 19:50:12.899444] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:31.528 [2024-04-24 19:50:12.899517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.528 [2024-04-24 19:50:12.899542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.528 [2024-04-24 19:50:12.899563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.528 [2024-04-24 19:50:12.899579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.528 [2024-04-24 19:50:12.899595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.528 [2024-04-24 19:50:12.899611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.528 [2024-04-24 19:50:12.899634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.528 [2024-04-24 19:50:12.899651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.528 [2024-04-24 19:50:12.899691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.528 [2024-04-24 19:50:12.899704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.528 [2024-04-24 19:50:12.899716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc87b0 is same with the state(5) to be set 00:19:31.528 [2024-04-24 19:50:12.909464] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc87b0 (9): Bad file descriptor 00:19:31.528 [2024-04-24 19:50:12.919514] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:32.467 19:50:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:32.467 19:50:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:32.467 19:50:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:32.467 19:50:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.467 19:50:13 -- common/autotest_common.sh@10 -- # set +x 00:19:32.467 19:50:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:32.467 19:50:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:32.467 [2024-04-24 19:50:13.965695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:33.846 [2024-04-24 19:50:14.989658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:33.846 [2024-04-24 19:50:14.989719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc87b0 with addr=10.0.0.2, port=4420 00:19:33.846 [2024-04-24 19:50:14.989749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc87b0 is same with the state(5) to be set 00:19:33.846 [2024-04-24 19:50:14.990243] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc87b0 (9): Bad file descriptor 00:19:33.846 [2024-04-24 19:50:14.990291] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
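Here errno 110 is ETIMEDOUT rather than ECONNREFUSED: after the @75/@76 steps above deleted 10.0.0.2/24 and downed cvl_0_0, the target is unreachable rather than refusing, so the pending spdk_sock_recv() times out, every queued admin command is aborted with SQ DELETION, and reconnect attempts hang until they fail. Throughout this phase the script sits in wait_for_bdev, whose @29/@33/@34 xtrace reduces to roughly the following; this is an approximation reconstructed from the trace, and the real helpers live in discovery_remove_ifc.sh:

    get_bdev_list() {
        # ask the host-side app which bdevs exist, flattened to one sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local bdev=$1                      # '' waits for the bdev list to drain
        while [[ "$(get_bdev_list)" != "$bdev" ]]; do
            sleep 1
        done
    }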
00:19:33.846 [2024-04-24 19:50:14.990346] bdev_nvme.c:6670:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:19:33.846 [2024-04-24 19:50:14.990390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.846 [2024-04-24 19:50:14.990415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.846 [2024-04-24 19:50:14.990437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.846 [2024-04-24 19:50:14.990453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.846 [2024-04-24 19:50:14.990468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.846 [2024-04-24 19:50:14.990483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.846 [2024-04-24 19:50:14.990499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.846 [2024-04-24 19:50:14.990513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.846 [2024-04-24 19:50:14.990529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.846 [2024-04-24 19:50:14.990543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.846 [2024-04-24 19:50:14.990558] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
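With the discovery controller itself now in a failed state, the removal half of the test is done; the restore half follows just below (@82/@83/@86): re-address and re-up the interface, then wait for discovery to re-attach the subsystem under a fresh controller name. Condensed as a sketch of the traced sequence:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1   # the re-attached controller surfaces as nvme1, bdev nvme1n1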
00:19:33.846 [2024-04-24 19:50:14.990786] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc8bc0 (9): Bad file descriptor 00:19:33.846 [2024-04-24 19:50:14.991804] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:33.846 [2024-04-24 19:50:14.991826] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:19:33.846 19:50:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:33.846 19:50:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:33.846 19:50:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:34.786 19:50:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:34.786 19:50:16 -- common/autotest_common.sh@10 -- # set +x 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:34.786 19:50:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.786 19:50:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:34.786 19:50:16 -- common/autotest_common.sh@10 -- # set +x 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:34.786 19:50:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:34.786 19:50:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:35.721 [2024-04-24 19:50:17.003984] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:35.721 [2024-04-24 19:50:17.004009] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:35.721 [2024-04-24 19:50:17.004049] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:35.721 [2024-04-24 19:50:17.130457] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:19:35.721 19:50:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:35.721 19:50:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.721 19:50:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:35.721 19:50:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:35.721 19:50:17 -- common/autotest_common.sh@10 -- # set +x 00:19:35.721 19:50:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:35.721 
19:50:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:35.721 19:50:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:35.721 19:50:17 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:35.721 19:50:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:35.721 [2024-04-24 19:50:17.193593] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:35.721 [2024-04-24 19:50:17.193657] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:35.721 [2024-04-24 19:50:17.193711] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:35.721 [2024-04-24 19:50:17.193734] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:19:35.721 [2024-04-24 19:50:17.193747] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:35.721 [2024-04-24 19:50:17.202126] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xcd8f40 was disconnected and freed. delete nvme_qpair. 00:19:37.102 19:50:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:37.102 19:50:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.102 19:50:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:37.102 19:50:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.102 19:50:18 -- common/autotest_common.sh@10 -- # set +x 00:19:37.102 19:50:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:37.102 19:50:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:37.102 19:50:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.102 19:50:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:37.102 19:50:18 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:37.102 19:50:18 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1752371 00:19:37.102 19:50:18 -- common/autotest_common.sh@936 -- # '[' -z 1752371 ']' 00:19:37.102 19:50:18 -- common/autotest_common.sh@940 -- # kill -0 1752371 00:19:37.102 19:50:18 -- common/autotest_common.sh@941 -- # uname 00:19:37.102 19:50:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:37.102 19:50:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1752371 00:19:37.102 19:50:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:37.102 19:50:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:37.102 19:50:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1752371' 00:19:37.102 killing process with pid 1752371 00:19:37.102 19:50:18 -- common/autotest_common.sh@955 -- # kill 1752371 00:19:37.102 19:50:18 -- common/autotest_common.sh@960 -- # wait 1752371 00:19:37.102 19:50:18 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:37.102 19:50:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:37.102 19:50:18 -- nvmf/common.sh@117 -- # sync 00:19:37.102 19:50:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:37.102 19:50:18 -- nvmf/common.sh@120 -- # set +e 00:19:37.102 19:50:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:37.102 19:50:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:37.102 rmmod nvme_tcp 00:19:37.102 rmmod nvme_fabrics 00:19:37.102 rmmod nvme_keyring 00:19:37.102 19:50:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:37.102 19:50:18 -- nvmf/common.sh@124 -- # set -e 00:19:37.102 19:50:18 -- 
nvmf/common.sh@125 -- # return 0 00:19:37.102 19:50:18 -- nvmf/common.sh@478 -- # '[' -n 1752223 ']' 00:19:37.102 19:50:18 -- nvmf/common.sh@479 -- # killprocess 1752223 00:19:37.102 19:50:18 -- common/autotest_common.sh@936 -- # '[' -z 1752223 ']' 00:19:37.102 19:50:18 -- common/autotest_common.sh@940 -- # kill -0 1752223 00:19:37.102 19:50:18 -- common/autotest_common.sh@941 -- # uname 00:19:37.102 19:50:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:37.102 19:50:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1752223 00:19:37.363 19:50:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:37.363 19:50:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:37.363 19:50:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1752223' 00:19:37.363 killing process with pid 1752223 00:19:37.363 19:50:18 -- common/autotest_common.sh@955 -- # kill 1752223 00:19:37.363 19:50:18 -- common/autotest_common.sh@960 -- # wait 1752223 00:19:37.624 19:50:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:37.624 19:50:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:37.624 19:50:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:37.624 19:50:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:37.624 19:50:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:37.624 19:50:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.624 19:50:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:37.624 19:50:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.532 19:50:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:39.532 00:19:39.532 real 0m18.395s 00:19:39.532 user 0m25.610s 00:19:39.532 sys 0m2.981s 00:19:39.532 19:50:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:39.532 19:50:20 -- common/autotest_common.sh@10 -- # set +x 00:19:39.532 ************************************ 00:19:39.532 END TEST nvmf_discovery_remove_ifc 00:19:39.532 ************************************ 00:19:39.532 19:50:20 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:39.532 19:50:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:39.532 19:50:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:39.532 19:50:20 -- common/autotest_common.sh@10 -- # set +x 00:19:39.792 ************************************ 00:19:39.792 START TEST nvmf_identify_kernel_target 00:19:39.792 ************************************ 00:19:39.792 19:50:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:39.792 * Looking for test storage... 
00:19:39.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:39.792 19:50:21 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.792 19:50:21 -- nvmf/common.sh@7 -- # uname -s 00:19:39.792 19:50:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.792 19:50:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.792 19:50:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.792 19:50:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.792 19:50:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.792 19:50:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.792 19:50:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.792 19:50:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.792 19:50:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.792 19:50:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.792 19:50:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.792 19:50:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.792 19:50:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.792 19:50:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.792 19:50:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.792 19:50:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.792 19:50:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.792 19:50:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.792 19:50:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.792 19:50:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.792 19:50:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.792 19:50:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.792 19:50:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.792 19:50:21 -- paths/export.sh@5 -- # export PATH 00:19:39.792 19:50:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.792 19:50:21 -- nvmf/common.sh@47 -- # : 0 00:19:39.792 19:50:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:39.792 19:50:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:39.792 19:50:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.792 19:50:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.792 19:50:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.792 19:50:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:39.792 19:50:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:39.792 19:50:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:39.792 19:50:21 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:39.792 19:50:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:39.792 19:50:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.792 19:50:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:39.792 19:50:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:39.792 19:50:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:39.792 19:50:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.792 19:50:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.792 19:50:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.792 19:50:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:39.792 19:50:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:39.792 19:50:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:39.792 19:50:21 -- common/autotest_common.sh@10 -- # set +x 00:19:41.694 19:50:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:41.694 19:50:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:41.694 19:50:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:41.694 19:50:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:41.694 19:50:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:41.694 19:50:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:41.694 19:50:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:41.694 19:50:22 -- nvmf/common.sh@295 -- # net_devs=() 00:19:41.694 19:50:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:41.694 19:50:22 -- nvmf/common.sh@296 -- # e810=() 00:19:41.694 19:50:22 -- nvmf/common.sh@296 -- # local -ga e810 00:19:41.694 19:50:22 -- nvmf/common.sh@297 -- # 
x722=() 00:19:41.694 19:50:22 -- nvmf/common.sh@297 -- # local -ga x722 00:19:41.694 19:50:22 -- nvmf/common.sh@298 -- # mlx=() 00:19:41.694 19:50:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:41.694 19:50:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:41.694 19:50:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:41.694 19:50:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:41.694 19:50:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:41.694 19:50:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:41.694 19:50:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:41.694 19:50:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:41.694 19:50:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:41.694 19:50:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:41.694 19:50:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:41.694 19:50:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:41.694 19:50:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:41.694 19:50:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:41.694 19:50:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:41.694 19:50:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:41.694 19:50:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:41.694 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:41.694 19:50:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:41.694 19:50:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:41.694 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:41.694 19:50:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:41.694 19:50:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.694 19:50:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.694 19:50:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:41.694 19:50:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.694 19:50:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:41.694 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:41.694 19:50:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:19:41.694 19:50:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.694 19:50:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.694 19:50:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:41.694 19:50:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.694 19:50:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:41.694 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:41.694 19:50:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.694 19:50:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:41.694 19:50:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:41.694 19:50:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:41.694 19:50:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:41.694 19:50:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.694 19:50:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:41.694 19:50:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:41.694 19:50:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:41.694 19:50:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:41.694 19:50:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:41.694 19:50:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:41.694 19:50:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:41.694 19:50:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.694 19:50:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:41.694 19:50:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:41.694 19:50:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:41.694 19:50:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:41.694 19:50:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:41.694 19:50:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:41.694 19:50:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:41.694 19:50:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:41.694 19:50:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:41.694 19:50:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:41.694 19:50:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:41.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:19:41.694 00:19:41.694 --- 10.0.0.2 ping statistics --- 00:19:41.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.694 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:19:41.694 19:50:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:41.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:41.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:19:41.694 00:19:41.694 --- 10.0.0.1 ping statistics --- 00:19:41.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.694 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:19:41.694 19:50:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.694 19:50:23 -- nvmf/common.sh@411 -- # return 0 00:19:41.694 19:50:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:41.694 19:50:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.694 19:50:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:41.694 19:50:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:41.694 19:50:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.694 19:50:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:41.694 19:50:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:41.694 19:50:23 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:41.694 19:50:23 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:41.694 19:50:23 -- nvmf/common.sh@717 -- # local ip 00:19:41.694 19:50:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:41.694 19:50:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:41.694 19:50:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.694 19:50:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.694 19:50:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:41.694 19:50:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.694 19:50:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:41.694 19:50:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:41.694 19:50:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:41.694 19:50:23 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:41.694 19:50:23 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:41.694 19:50:23 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:41.694 19:50:23 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:19:41.694 19:50:23 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:41.694 19:50:23 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:41.694 19:50:23 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:41.694 19:50:23 -- nvmf/common.sh@628 -- # local block nvme 00:19:41.694 19:50:23 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:41.694 19:50:23 -- nvmf/common.sh@631 -- # modprobe nvmet 00:19:41.694 19:50:23 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:41.694 19:50:23 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:19:43.071 Waiting for block devices as requested 00:19:43.071 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:19:43.071 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:43.071 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:43.071 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:43.331 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:43.331 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:43.331 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:43.331 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:43.590 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:43.590 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:43.590 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:43.590 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:43.849 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:43.849 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:43.849 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:43.849 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:44.107 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:44.107 19:50:25 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:19:44.107 19:50:25 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:44.107 19:50:25 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:19:44.107 19:50:25 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:19:44.107 19:50:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:44.107 19:50:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:44.107 19:50:25 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:19:44.107 19:50:25 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:44.107 19:50:25 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:44.107 No valid GPT data, bailing 00:19:44.107 19:50:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:44.107 19:50:25 -- scripts/common.sh@391 -- # pt= 00:19:44.107 19:50:25 -- scripts/common.sh@392 -- # return 1 00:19:44.107 19:50:25 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:19:44.107 19:50:25 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:19:44.107 19:50:25 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:44.107 19:50:25 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:44.107 19:50:25 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:44.107 19:50:25 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:44.107 19:50:25 -- nvmf/common.sh@656 -- # echo 1 00:19:44.107 19:50:25 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:19:44.107 19:50:25 -- nvmf/common.sh@658 -- # echo 1 00:19:44.107 19:50:25 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:19:44.107 19:50:25 -- nvmf/common.sh@661 -- # echo tcp 00:19:44.107 19:50:25 -- nvmf/common.sh@662 -- # echo 4420 00:19:44.107 19:50:25 -- nvmf/common.sh@663 -- # echo ipv4 00:19:44.107 19:50:25 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:44.367 19:50:25 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:19:44.367 00:19:44.367 Discovery Log Number of Records 2, Generation counter 2 00:19:44.367 =====Discovery Log Entry 0====== 00:19:44.367 trtype: tcp 00:19:44.367 adrfam: ipv4 00:19:44.367 subtype: current discovery subsystem 00:19:44.367 treq: not specified, sq flow control disable supported 00:19:44.367 portid: 1 00:19:44.367 trsvcid: 4420 00:19:44.367 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:44.367 traddr: 10.0.0.1 00:19:44.367 eflags: none 00:19:44.367 sectype: none 00:19:44.367 =====Discovery Log Entry 1====== 00:19:44.367 trtype: tcp 00:19:44.367 adrfam: ipv4 00:19:44.367 subtype: nvme subsystem 00:19:44.367 treq: not specified, sq flow control disable supported 00:19:44.367 portid: 1 00:19:44.367 trsvcid: 4420 00:19:44.367 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:44.367 traddr: 10.0.0.1 00:19:44.367 eflags: none 00:19:44.367 sectype: none 00:19:44.367 19:50:25 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:44.367 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:44.367 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.367 ===================================================== 00:19:44.367 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:44.367 ===================================================== 00:19:44.367 Controller Capabilities/Features 00:19:44.367 ================================ 00:19:44.367 Vendor ID: 0000 00:19:44.367 Subsystem Vendor ID: 0000 00:19:44.367 Serial Number: caf65bd01cf720422eea 00:19:44.367 Model Number: Linux 00:19:44.367 Firmware Version: 6.7.0-68 00:19:44.367 Recommended Arb Burst: 0 00:19:44.367 IEEE OUI Identifier: 00 00 00 00:19:44.367 Multi-path I/O 00:19:44.367 May have multiple subsystem ports: No 00:19:44.367 May have multiple controllers: No 00:19:44.367 Associated with SR-IOV VF: No 00:19:44.367 Max Data Transfer Size: Unlimited 00:19:44.367 Max Number of Namespaces: 0 00:19:44.367 Max Number of I/O Queues: 1024 00:19:44.367 NVMe Specification Version (VS): 1.3 00:19:44.367 NVMe Specification Version (Identify): 1.3 00:19:44.367 Maximum Queue Entries: 1024 00:19:44.367 Contiguous Queues Required: No 00:19:44.367 Arbitration Mechanisms Supported 00:19:44.367 Weighted Round Robin: Not Supported 00:19:44.367 Vendor Specific: Not Supported 00:19:44.367 Reset Timeout: 7500 ms 00:19:44.367 Doorbell Stride: 4 bytes 00:19:44.367 NVM Subsystem Reset: Not Supported 00:19:44.367 Command Sets Supported 00:19:44.367 NVM Command Set: Supported 00:19:44.367 Boot Partition: Not Supported 00:19:44.367 Memory Page Size Minimum: 4096 bytes 00:19:44.367 Memory Page Size Maximum: 4096 bytes 00:19:44.367 Persistent Memory Region: Not Supported 00:19:44.367 Optional Asynchronous Events Supported 00:19:44.367 Namespace Attribute Notices: Not Supported 00:19:44.367 Firmware Activation Notices: Not Supported 00:19:44.367 ANA Change Notices: Not Supported 00:19:44.367 PLE Aggregate Log Change Notices: Not Supported 00:19:44.367 LBA Status Info Alert Notices: Not Supported 00:19:44.367 EGE Aggregate Log Change Notices: Not Supported 00:19:44.367 Normal NVM Subsystem Shutdown event: Not Supported 00:19:44.367 Zone Descriptor Change Notices: Not Supported 00:19:44.367 Discovery Log Change Notices: Supported 
00:19:44.367 Controller Attributes 00:19:44.367 128-bit Host Identifier: Not Supported 00:19:44.367 Non-Operational Permissive Mode: Not Supported 00:19:44.367 NVM Sets: Not Supported 00:19:44.367 Read Recovery Levels: Not Supported 00:19:44.367 Endurance Groups: Not Supported 00:19:44.367 Predictable Latency Mode: Not Supported 00:19:44.367 Traffic Based Keep ALive: Not Supported 00:19:44.367 Namespace Granularity: Not Supported 00:19:44.367 SQ Associations: Not Supported 00:19:44.367 UUID List: Not Supported 00:19:44.367 Multi-Domain Subsystem: Not Supported 00:19:44.367 Fixed Capacity Management: Not Supported 00:19:44.367 Variable Capacity Management: Not Supported 00:19:44.367 Delete Endurance Group: Not Supported 00:19:44.367 Delete NVM Set: Not Supported 00:19:44.367 Extended LBA Formats Supported: Not Supported 00:19:44.367 Flexible Data Placement Supported: Not Supported 00:19:44.367 00:19:44.367 Controller Memory Buffer Support 00:19:44.367 ================================ 00:19:44.367 Supported: No 00:19:44.367 00:19:44.367 Persistent Memory Region Support 00:19:44.367 ================================ 00:19:44.367 Supported: No 00:19:44.367 00:19:44.367 Admin Command Set Attributes 00:19:44.367 ============================ 00:19:44.367 Security Send/Receive: Not Supported 00:19:44.367 Format NVM: Not Supported 00:19:44.367 Firmware Activate/Download: Not Supported 00:19:44.367 Namespace Management: Not Supported 00:19:44.367 Device Self-Test: Not Supported 00:19:44.367 Directives: Not Supported 00:19:44.367 NVMe-MI: Not Supported 00:19:44.367 Virtualization Management: Not Supported 00:19:44.367 Doorbell Buffer Config: Not Supported 00:19:44.367 Get LBA Status Capability: Not Supported 00:19:44.367 Command & Feature Lockdown Capability: Not Supported 00:19:44.367 Abort Command Limit: 1 00:19:44.367 Async Event Request Limit: 1 00:19:44.367 Number of Firmware Slots: N/A 00:19:44.367 Firmware Slot 1 Read-Only: N/A 00:19:44.367 Firmware Activation Without Reset: N/A 00:19:44.367 Multiple Update Detection Support: N/A 00:19:44.367 Firmware Update Granularity: No Information Provided 00:19:44.367 Per-Namespace SMART Log: No 00:19:44.367 Asymmetric Namespace Access Log Page: Not Supported 00:19:44.367 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:44.367 Command Effects Log Page: Not Supported 00:19:44.367 Get Log Page Extended Data: Supported 00:19:44.367 Telemetry Log Pages: Not Supported 00:19:44.367 Persistent Event Log Pages: Not Supported 00:19:44.367 Supported Log Pages Log Page: May Support 00:19:44.367 Commands Supported & Effects Log Page: Not Supported 00:19:44.367 Feature Identifiers & Effects Log Page:May Support 00:19:44.367 NVMe-MI Commands & Effects Log Page: May Support 00:19:44.367 Data Area 4 for Telemetry Log: Not Supported 00:19:44.367 Error Log Page Entries Supported: 1 00:19:44.367 Keep Alive: Not Supported 00:19:44.367 00:19:44.367 NVM Command Set Attributes 00:19:44.367 ========================== 00:19:44.367 Submission Queue Entry Size 00:19:44.367 Max: 1 00:19:44.367 Min: 1 00:19:44.367 Completion Queue Entry Size 00:19:44.367 Max: 1 00:19:44.367 Min: 1 00:19:44.367 Number of Namespaces: 0 00:19:44.367 Compare Command: Not Supported 00:19:44.367 Write Uncorrectable Command: Not Supported 00:19:44.367 Dataset Management Command: Not Supported 00:19:44.367 Write Zeroes Command: Not Supported 00:19:44.367 Set Features Save Field: Not Supported 00:19:44.367 Reservations: Not Supported 00:19:44.367 Timestamp: Not Supported 00:19:44.367 Copy: Not 
Supported 00:19:44.367 Volatile Write Cache: Not Present 00:19:44.367 Atomic Write Unit (Normal): 1 00:19:44.367 Atomic Write Unit (PFail): 1 00:19:44.367 Atomic Compare & Write Unit: 1 00:19:44.367 Fused Compare & Write: Not Supported 00:19:44.367 Scatter-Gather List 00:19:44.367 SGL Command Set: Supported 00:19:44.367 SGL Keyed: Not Supported 00:19:44.367 SGL Bit Bucket Descriptor: Not Supported 00:19:44.367 SGL Metadata Pointer: Not Supported 00:19:44.367 Oversized SGL: Not Supported 00:19:44.367 SGL Metadata Address: Not Supported 00:19:44.367 SGL Offset: Supported 00:19:44.367 Transport SGL Data Block: Not Supported 00:19:44.367 Replay Protected Memory Block: Not Supported 00:19:44.367 00:19:44.367 Firmware Slot Information 00:19:44.367 ========================= 00:19:44.367 Active slot: 0 00:19:44.367 00:19:44.367 00:19:44.367 Error Log 00:19:44.367 ========= 00:19:44.367 00:19:44.367 Active Namespaces 00:19:44.367 ================= 00:19:44.367 Discovery Log Page 00:19:44.367 ================== 00:19:44.367 Generation Counter: 2 00:19:44.367 Number of Records: 2 00:19:44.367 Record Format: 0 00:19:44.367 00:19:44.367 Discovery Log Entry 0 00:19:44.367 ---------------------- 00:19:44.367 Transport Type: 3 (TCP) 00:19:44.367 Address Family: 1 (IPv4) 00:19:44.367 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:44.367 Entry Flags: 00:19:44.367 Duplicate Returned Information: 0 00:19:44.367 Explicit Persistent Connection Support for Discovery: 0 00:19:44.367 Transport Requirements: 00:19:44.367 Secure Channel: Not Specified 00:19:44.367 Port ID: 1 (0x0001) 00:19:44.367 Controller ID: 65535 (0xffff) 00:19:44.367 Admin Max SQ Size: 32 00:19:44.367 Transport Service Identifier: 4420 00:19:44.367 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:44.367 Transport Address: 10.0.0.1 00:19:44.367 Discovery Log Entry 1 00:19:44.367 ---------------------- 00:19:44.367 Transport Type: 3 (TCP) 00:19:44.367 Address Family: 1 (IPv4) 00:19:44.367 Subsystem Type: 2 (NVM Subsystem) 00:19:44.367 Entry Flags: 00:19:44.367 Duplicate Returned Information: 0 00:19:44.367 Explicit Persistent Connection Support for Discovery: 0 00:19:44.367 Transport Requirements: 00:19:44.367 Secure Channel: Not Specified 00:19:44.367 Port ID: 1 (0x0001) 00:19:44.367 Controller ID: 65535 (0xffff) 00:19:44.367 Admin Max SQ Size: 32 00:19:44.367 Transport Service Identifier: 4420 00:19:44.367 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:44.367 Transport Address: 10.0.0.1 00:19:44.367 19:50:25 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:44.367 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.367 get_feature(0x01) failed 00:19:44.367 get_feature(0x02) failed 00:19:44.367 get_feature(0x04) failed 00:19:44.367 ===================================================== 00:19:44.367 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:44.367 ===================================================== 00:19:44.367 Controller Capabilities/Features 00:19:44.367 ================================ 00:19:44.367 Vendor ID: 0000 00:19:44.367 Subsystem Vendor ID: 0000 00:19:44.367 Serial Number: 4c080ccb4b49fa1ccd77 00:19:44.367 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:44.367 Firmware Version: 6.7.0-68 00:19:44.367 Recommended Arb Burst: 6 00:19:44.367 IEEE OUI Identifier: 00 00 00 
00:19:44.367 Multi-path I/O 00:19:44.367 May have multiple subsystem ports: Yes 00:19:44.367 May have multiple controllers: Yes 00:19:44.367 Associated with SR-IOV VF: No 00:19:44.367 Max Data Transfer Size: Unlimited 00:19:44.367 Max Number of Namespaces: 1024 00:19:44.367 Max Number of I/O Queues: 128 00:19:44.367 NVMe Specification Version (VS): 1.3 00:19:44.367 NVMe Specification Version (Identify): 1.3 00:19:44.367 Maximum Queue Entries: 1024 00:19:44.367 Contiguous Queues Required: No 00:19:44.367 Arbitration Mechanisms Supported 00:19:44.367 Weighted Round Robin: Not Supported 00:19:44.367 Vendor Specific: Not Supported 00:19:44.367 Reset Timeout: 7500 ms 00:19:44.367 Doorbell Stride: 4 bytes 00:19:44.367 NVM Subsystem Reset: Not Supported 00:19:44.367 Command Sets Supported 00:19:44.367 NVM Command Set: Supported 00:19:44.367 Boot Partition: Not Supported 00:19:44.367 Memory Page Size Minimum: 4096 bytes 00:19:44.367 Memory Page Size Maximum: 4096 bytes 00:19:44.367 Persistent Memory Region: Not Supported 00:19:44.367 Optional Asynchronous Events Supported 00:19:44.367 Namespace Attribute Notices: Supported 00:19:44.367 Firmware Activation Notices: Not Supported 00:19:44.367 ANA Change Notices: Supported 00:19:44.367 PLE Aggregate Log Change Notices: Not Supported 00:19:44.367 LBA Status Info Alert Notices: Not Supported 00:19:44.367 EGE Aggregate Log Change Notices: Not Supported 00:19:44.367 Normal NVM Subsystem Shutdown event: Not Supported 00:19:44.367 Zone Descriptor Change Notices: Not Supported 00:19:44.367 Discovery Log Change Notices: Not Supported 00:19:44.367 Controller Attributes 00:19:44.367 128-bit Host Identifier: Supported 00:19:44.367 Non-Operational Permissive Mode: Not Supported 00:19:44.367 NVM Sets: Not Supported 00:19:44.367 Read Recovery Levels: Not Supported 00:19:44.367 Endurance Groups: Not Supported 00:19:44.367 Predictable Latency Mode: Not Supported 00:19:44.367 Traffic Based Keep ALive: Supported 00:19:44.367 Namespace Granularity: Not Supported 00:19:44.367 SQ Associations: Not Supported 00:19:44.367 UUID List: Not Supported 00:19:44.367 Multi-Domain Subsystem: Not Supported 00:19:44.367 Fixed Capacity Management: Not Supported 00:19:44.367 Variable Capacity Management: Not Supported 00:19:44.367 Delete Endurance Group: Not Supported 00:19:44.367 Delete NVM Set: Not Supported 00:19:44.367 Extended LBA Formats Supported: Not Supported 00:19:44.367 Flexible Data Placement Supported: Not Supported 00:19:44.367 00:19:44.367 Controller Memory Buffer Support 00:19:44.367 ================================ 00:19:44.367 Supported: No 00:19:44.367 00:19:44.367 Persistent Memory Region Support 00:19:44.367 ================================ 00:19:44.367 Supported: No 00:19:44.367 00:19:44.367 Admin Command Set Attributes 00:19:44.367 ============================ 00:19:44.367 Security Send/Receive: Not Supported 00:19:44.367 Format NVM: Not Supported 00:19:44.367 Firmware Activate/Download: Not Supported 00:19:44.367 Namespace Management: Not Supported 00:19:44.367 Device Self-Test: Not Supported 00:19:44.367 Directives: Not Supported 00:19:44.367 NVMe-MI: Not Supported 00:19:44.367 Virtualization Management: Not Supported 00:19:44.367 Doorbell Buffer Config: Not Supported 00:19:44.368 Get LBA Status Capability: Not Supported 00:19:44.368 Command & Feature Lockdown Capability: Not Supported 00:19:44.368 Abort Command Limit: 4 00:19:44.368 Async Event Request Limit: 4 00:19:44.368 Number of Firmware Slots: N/A 00:19:44.368 Firmware Slot 1 Read-Only: N/A 00:19:44.368 
Firmware Activation Without Reset: N/A 00:19:44.368 Multiple Update Detection Support: N/A 00:19:44.368 Firmware Update Granularity: No Information Provided 00:19:44.368 Per-Namespace SMART Log: Yes 00:19:44.368 Asymmetric Namespace Access Log Page: Supported 00:19:44.368 ANA Transition Time : 10 sec 00:19:44.368 00:19:44.368 Asymmetric Namespace Access Capabilities 00:19:44.368 ANA Optimized State : Supported 00:19:44.368 ANA Non-Optimized State : Supported 00:19:44.368 ANA Inaccessible State : Supported 00:19:44.368 ANA Persistent Loss State : Supported 00:19:44.368 ANA Change State : Supported 00:19:44.368 ANAGRPID is not changed : No 00:19:44.368 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:44.368 00:19:44.368 ANA Group Identifier Maximum : 128 00:19:44.368 Number of ANA Group Identifiers : 128 00:19:44.368 Max Number of Allowed Namespaces : 1024 00:19:44.368 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:44.368 Command Effects Log Page: Supported 00:19:44.368 Get Log Page Extended Data: Supported 00:19:44.368 Telemetry Log Pages: Not Supported 00:19:44.368 Persistent Event Log Pages: Not Supported 00:19:44.368 Supported Log Pages Log Page: May Support 00:19:44.368 Commands Supported & Effects Log Page: Not Supported 00:19:44.368 Feature Identifiers & Effects Log Page:May Support 00:19:44.368 NVMe-MI Commands & Effects Log Page: May Support 00:19:44.368 Data Area 4 for Telemetry Log: Not Supported 00:19:44.368 Error Log Page Entries Supported: 128 00:19:44.368 Keep Alive: Supported 00:19:44.368 Keep Alive Granularity: 1000 ms 00:19:44.368 00:19:44.368 NVM Command Set Attributes 00:19:44.368 ========================== 00:19:44.368 Submission Queue Entry Size 00:19:44.368 Max: 64 00:19:44.368 Min: 64 00:19:44.368 Completion Queue Entry Size 00:19:44.368 Max: 16 00:19:44.368 Min: 16 00:19:44.368 Number of Namespaces: 1024 00:19:44.368 Compare Command: Not Supported 00:19:44.368 Write Uncorrectable Command: Not Supported 00:19:44.368 Dataset Management Command: Supported 00:19:44.368 Write Zeroes Command: Supported 00:19:44.368 Set Features Save Field: Not Supported 00:19:44.368 Reservations: Not Supported 00:19:44.368 Timestamp: Not Supported 00:19:44.368 Copy: Not Supported 00:19:44.368 Volatile Write Cache: Present 00:19:44.368 Atomic Write Unit (Normal): 1 00:19:44.368 Atomic Write Unit (PFail): 1 00:19:44.368 Atomic Compare & Write Unit: 1 00:19:44.368 Fused Compare & Write: Not Supported 00:19:44.368 Scatter-Gather List 00:19:44.368 SGL Command Set: Supported 00:19:44.368 SGL Keyed: Not Supported 00:19:44.368 SGL Bit Bucket Descriptor: Not Supported 00:19:44.368 SGL Metadata Pointer: Not Supported 00:19:44.368 Oversized SGL: Not Supported 00:19:44.368 SGL Metadata Address: Not Supported 00:19:44.368 SGL Offset: Supported 00:19:44.368 Transport SGL Data Block: Not Supported 00:19:44.368 Replay Protected Memory Block: Not Supported 00:19:44.368 00:19:44.368 Firmware Slot Information 00:19:44.368 ========================= 00:19:44.368 Active slot: 0 00:19:44.368 00:19:44.368 Asymmetric Namespace Access 00:19:44.368 =========================== 00:19:44.368 Change Count : 0 00:19:44.368 Number of ANA Group Descriptors : 1 00:19:44.368 ANA Group Descriptor : 0 00:19:44.368 ANA Group ID : 1 00:19:44.368 Number of NSID Values : 1 00:19:44.368 Change Count : 0 00:19:44.368 ANA State : 1 00:19:44.368 Namespace Identifier : 1 00:19:44.368 00:19:44.368 Commands Supported and Effects 00:19:44.368 ============================== 00:19:44.368 Admin Commands 00:19:44.368 -------------- 
00:19:44.368 Get Log Page (02h): Supported 00:19:44.368 Identify (06h): Supported 00:19:44.368 Abort (08h): Supported 00:19:44.368 Set Features (09h): Supported 00:19:44.368 Get Features (0Ah): Supported 00:19:44.368 Asynchronous Event Request (0Ch): Supported 00:19:44.368 Keep Alive (18h): Supported 00:19:44.368 I/O Commands 00:19:44.368 ------------ 00:19:44.368 Flush (00h): Supported 00:19:44.368 Write (01h): Supported LBA-Change 00:19:44.368 Read (02h): Supported 00:19:44.368 Write Zeroes (08h): Supported LBA-Change 00:19:44.368 Dataset Management (09h): Supported 00:19:44.368 00:19:44.368 Error Log 00:19:44.368 ========= 00:19:44.368 Entry: 0 00:19:44.368 Error Count: 0x3 00:19:44.368 Submission Queue Id: 0x0 00:19:44.368 Command Id: 0x5 00:19:44.368 Phase Bit: 0 00:19:44.368 Status Code: 0x2 00:19:44.368 Status Code Type: 0x0 00:19:44.368 Do Not Retry: 1 00:19:44.368 Error Location: 0x28 00:19:44.368 LBA: 0x0 00:19:44.368 Namespace: 0x0 00:19:44.368 Vendor Log Page: 0x0 00:19:44.368 ----------- 00:19:44.368 Entry: 1 00:19:44.368 Error Count: 0x2 00:19:44.368 Submission Queue Id: 0x0 00:19:44.368 Command Id: 0x5 00:19:44.368 Phase Bit: 0 00:19:44.368 Status Code: 0x2 00:19:44.368 Status Code Type: 0x0 00:19:44.368 Do Not Retry: 1 00:19:44.368 Error Location: 0x28 00:19:44.368 LBA: 0x0 00:19:44.368 Namespace: 0x0 00:19:44.368 Vendor Log Page: 0x0 00:19:44.368 ----------- 00:19:44.368 Entry: 2 00:19:44.368 Error Count: 0x1 00:19:44.368 Submission Queue Id: 0x0 00:19:44.368 Command Id: 0x4 00:19:44.368 Phase Bit: 0 00:19:44.368 Status Code: 0x2 00:19:44.368 Status Code Type: 0x0 00:19:44.368 Do Not Retry: 1 00:19:44.368 Error Location: 0x28 00:19:44.368 LBA: 0x0 00:19:44.368 Namespace: 0x0 00:19:44.368 Vendor Log Page: 0x0 00:19:44.368 00:19:44.368 Number of Queues 00:19:44.368 ================ 00:19:44.368 Number of I/O Submission Queues: 128 00:19:44.368 Number of I/O Completion Queues: 128 00:19:44.368 00:19:44.368 ZNS Specific Controller Data 00:19:44.368 ============================ 00:19:44.368 Zone Append Size Limit: 0 00:19:44.368 00:19:44.368 00:19:44.368 Active Namespaces 00:19:44.368 ================= 00:19:44.368 get_feature(0x05) failed 00:19:44.368 Namespace ID:1 00:19:44.368 Command Set Identifier: NVM (00h) 00:19:44.368 Deallocate: Supported 00:19:44.368 Deallocated/Unwritten Error: Not Supported 00:19:44.368 Deallocated Read Value: Unknown 00:19:44.368 Deallocate in Write Zeroes: Not Supported 00:19:44.368 Deallocated Guard Field: 0xFFFF 00:19:44.368 Flush: Supported 00:19:44.368 Reservation: Not Supported 00:19:44.368 Namespace Sharing Capabilities: Multiple Controllers 00:19:44.368 Size (in LBAs): 1953525168 (931GiB) 00:19:44.368 Capacity (in LBAs): 1953525168 (931GiB) 00:19:44.368 Utilization (in LBAs): 1953525168 (931GiB) 00:19:44.368 UUID: e2d16ba2-76ec-4b8c-9221-b13973ca4c98 00:19:44.368 Thin Provisioning: Not Supported 00:19:44.368 Per-NS Atomic Units: Yes 00:19:44.368 Atomic Boundary Size (Normal): 0 00:19:44.368 Atomic Boundary Size (PFail): 0 00:19:44.368 Atomic Boundary Offset: 0 00:19:44.368 NGUID/EUI64 Never Reused: No 00:19:44.368 ANA group ID: 1 00:19:44.368 Namespace Write Protected: No 00:19:44.368 Number of LBA Formats: 1 00:19:44.368 Current LBA Format: LBA Format #00 00:19:44.368 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:44.368 00:19:44.368 19:50:25 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:44.368 19:50:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:44.368 19:50:25 -- nvmf/common.sh@117 -- # sync 00:19:44.368 19:50:25 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:44.368 19:50:25 -- nvmf/common.sh@120 -- # set +e 00:19:44.368 19:50:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:44.368 19:50:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:44.368 rmmod nvme_tcp 00:19:44.627 rmmod nvme_fabrics 00:19:44.627 19:50:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:44.627 19:50:25 -- nvmf/common.sh@124 -- # set -e 00:19:44.627 19:50:25 -- nvmf/common.sh@125 -- # return 0 00:19:44.627 19:50:25 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:44.627 19:50:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:44.627 19:50:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:44.627 19:50:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:44.627 19:50:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:44.627 19:50:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:44.627 19:50:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.627 19:50:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.627 19:50:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.531 19:50:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:46.531 19:50:27 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:46.531 19:50:27 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:46.531 19:50:27 -- nvmf/common.sh@675 -- # echo 0 00:19:46.531 19:50:27 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:46.531 19:50:27 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:46.531 19:50:27 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:46.531 19:50:27 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:46.531 19:50:27 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:19:46.531 19:50:27 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:19:46.531 19:50:27 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:47.908 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:47.908 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:47.908 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:47.908 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:47.908 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:47.908 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:47.908 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:47.908 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:47.908 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:19:47.908 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:19:47.908 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:19:47.908 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:19:47.908 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:19:47.908 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:19:47.908 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:19:47.908 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:19:48.847 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:19:48.847 00:19:48.847 real 0m9.238s 00:19:48.847 user 0m1.915s 00:19:48.847 sys 0m3.337s 00:19:48.847 19:50:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:48.847 19:50:30 -- common/autotest_common.sh@10 -- # set +x 00:19:48.847 ************************************ 00:19:48.847 END 
TEST nvmf_identify_kernel_target 00:19:48.847 ************************************ 00:19:48.847 19:50:30 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:48.847 19:50:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:48.847 19:50:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:48.847 19:50:30 -- common/autotest_common.sh@10 -- # set +x 00:19:49.105 ************************************ 00:19:49.105 START TEST nvmf_auth 00:19:49.105 ************************************ 00:19:49.105 19:50:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:49.105 * Looking for test storage... 00:19:49.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:49.105 19:50:30 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:49.105 19:50:30 -- nvmf/common.sh@7 -- # uname -s 00:19:49.105 19:50:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.105 19:50:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.105 19:50:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.105 19:50:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.105 19:50:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.105 19:50:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.105 19:50:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.105 19:50:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.105 19:50:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.105 19:50:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.105 19:50:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.105 19:50:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.105 19:50:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.105 19:50:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.105 19:50:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:49.105 19:50:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.105 19:50:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:49.105 19:50:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.105 19:50:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.105 19:50:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.105 19:50:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.105 19:50:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.105 19:50:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.105 19:50:30 -- paths/export.sh@5 -- # export PATH 00:19:49.105 19:50:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.105 19:50:30 -- nvmf/common.sh@47 -- # : 0 00:19:49.105 19:50:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:49.105 19:50:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:49.105 19:50:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.105 19:50:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.105 19:50:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.105 19:50:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:49.105 19:50:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:49.105 19:50:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:49.105 19:50:30 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:49.105 19:50:30 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:49.105 19:50:30 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:49.105 19:50:30 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:49.105 19:50:30 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:49.105 19:50:30 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:49.105 19:50:30 -- host/auth.sh@21 -- # keys=() 00:19:49.105 19:50:30 -- host/auth.sh@77 -- # nvmftestinit 00:19:49.105 19:50:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:49.105 19:50:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.105 19:50:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:49.105 19:50:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:49.106 19:50:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:49.106 19:50:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.106 19:50:30 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.106 19:50:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.106 19:50:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:49.106 19:50:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:49.106 19:50:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:49.106 19:50:30 -- common/autotest_common.sh@10 -- # set +x 00:19:51.009 19:50:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:51.009 19:50:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:51.009 19:50:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:51.009 19:50:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:51.009 19:50:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:51.010 19:50:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:51.010 19:50:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:51.010 19:50:32 -- nvmf/common.sh@295 -- # net_devs=() 00:19:51.010 19:50:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:51.010 19:50:32 -- nvmf/common.sh@296 -- # e810=() 00:19:51.010 19:50:32 -- nvmf/common.sh@296 -- # local -ga e810 00:19:51.010 19:50:32 -- nvmf/common.sh@297 -- # x722=() 00:19:51.010 19:50:32 -- nvmf/common.sh@297 -- # local -ga x722 00:19:51.010 19:50:32 -- nvmf/common.sh@298 -- # mlx=() 00:19:51.010 19:50:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:51.010 19:50:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.010 19:50:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.010 19:50:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.010 19:50:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.010 19:50:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.010 19:50:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.010 19:50:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.010 19:50:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.010 19:50:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.010 19:50:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.010 19:50:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.010 19:50:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:51.010 19:50:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:51.010 19:50:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:51.010 19:50:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.010 19:50:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:51.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:51.010 19:50:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:51.010 19:50:32 -- nvmf/common.sh@341 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:19:51.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:51.010 19:50:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:51.010 19:50:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.010 19:50:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.010 19:50:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:51.010 19:50:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.010 19:50:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:51.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:51.010 19:50:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.010 19:50:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:51.010 19:50:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.010 19:50:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:51.010 19:50:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.010 19:50:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:51.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:51.010 19:50:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.010 19:50:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:51.010 19:50:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:51.010 19:50:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:51.010 19:50:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:51.010 19:50:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.010 19:50:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.010 19:50:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.010 19:50:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:51.010 19:50:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.010 19:50:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.010 19:50:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:51.010 19:50:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.010 19:50:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.010 19:50:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:51.010 19:50:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:51.010 19:50:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.010 19:50:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.010 19:50:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.010 19:50:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.010 19:50:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:51.010 19:50:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.010 19:50:32 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.010 19:50:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.010 19:50:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:51.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:19:51.010 00:19:51.010 --- 10.0.0.2 ping statistics --- 00:19:51.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.010 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:19:51.010 19:50:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:19:51.010 00:19:51.010 --- 10.0.0.1 ping statistics --- 00:19:51.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.010 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:19:51.268 19:50:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.268 19:50:32 -- nvmf/common.sh@411 -- # return 0 00:19:51.268 19:50:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:51.268 19:50:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.268 19:50:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:51.268 19:50:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:51.268 19:50:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.268 19:50:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:51.268 19:50:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:51.268 19:50:32 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:19:51.268 19:50:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:51.268 19:50:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:51.268 19:50:32 -- common/autotest_common.sh@10 -- # set +x 00:19:51.268 19:50:32 -- nvmf/common.sh@470 -- # nvmfpid=1759577 00:19:51.268 19:50:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:51.268 19:50:32 -- nvmf/common.sh@471 -- # waitforlisten 1759577 00:19:51.268 19:50:32 -- common/autotest_common.sh@817 -- # '[' -z 1759577 ']' 00:19:51.268 19:50:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.268 19:50:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:51.268 19:50:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
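
Aside: the network plumbing traced above is a physical-port-in-a-namespace pattern, so NVMe/TCP traffic between initiator and target crosses a real link even on one machine. Condensed from the commands in this log (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones shown; the real logic is nvmf_tcp_init in test/nvmf/common.sh):

    ip netns add cvl_0_0_ns_spdk                  # target port gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port
    ping -c 1 10.0.0.2                            # sanity check: root ns -> target ns

The nvmf_tgt app is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt, as in the nvmfappstart trace above), and the test waits for its RPC socket /var/tmp/spdk.sock.
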
00:19:51.268 19:50:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:51.268 19:50:32 -- common/autotest_common.sh@10 -- # set +x 00:19:51.526 19:50:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:51.526 19:50:32 -- common/autotest_common.sh@850 -- # return 0 00:19:51.526 19:50:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:51.526 19:50:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:51.526 19:50:32 -- common/autotest_common.sh@10 -- # set +x 00:19:51.526 19:50:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.526 19:50:32 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:51.526 19:50:32 -- host/auth.sh@81 -- # gen_key null 32 00:19:51.526 19:50:32 -- host/auth.sh@53 -- # local digest len file key 00:19:51.526 19:50:32 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:51.526 19:50:32 -- host/auth.sh@54 -- # local -A digests 00:19:51.526 19:50:32 -- host/auth.sh@56 -- # digest=null 00:19:51.526 19:50:32 -- host/auth.sh@56 -- # len=32 00:19:51.526 19:50:32 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:51.526 19:50:32 -- host/auth.sh@57 -- # key=10552d4f24306389f0591e8188204f2c 00:19:51.526 19:50:32 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:19:51.526 19:50:32 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.Cpy 00:19:51.526 19:50:32 -- host/auth.sh@59 -- # format_dhchap_key 10552d4f24306389f0591e8188204f2c 0 00:19:51.526 19:50:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 10552d4f24306389f0591e8188204f2c 0 00:19:51.526 19:50:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:51.526 19:50:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:51.526 19:50:32 -- nvmf/common.sh@693 -- # key=10552d4f24306389f0591e8188204f2c 00:19:51.526 19:50:32 -- nvmf/common.sh@693 -- # digest=0 00:19:51.526 19:50:32 -- nvmf/common.sh@694 -- # python - 00:19:51.526 19:50:32 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.Cpy 00:19:51.526 19:50:32 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.Cpy 00:19:51.526 19:50:32 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.Cpy 00:19:51.526 19:50:32 -- host/auth.sh@82 -- # gen_key null 48 00:19:51.527 19:50:32 -- host/auth.sh@53 -- # local digest len file key 00:19:51.527 19:50:32 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:51.527 19:50:32 -- host/auth.sh@54 -- # local -A digests 00:19:51.527 19:50:32 -- host/auth.sh@56 -- # digest=null 00:19:51.527 19:50:32 -- host/auth.sh@56 -- # len=48 00:19:51.527 19:50:32 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:51.527 19:50:32 -- host/auth.sh@57 -- # key=7c5cceca7592bae2e8d42b42cf6d9515c9ce2e2470b22a95 00:19:51.527 19:50:32 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:19:51.527 19:50:32 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.7dl 00:19:51.527 19:50:32 -- host/auth.sh@59 -- # format_dhchap_key 7c5cceca7592bae2e8d42b42cf6d9515c9ce2e2470b22a95 0 00:19:51.527 19:50:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 7c5cceca7592bae2e8d42b42cf6d9515c9ce2e2470b22a95 0 00:19:51.527 19:50:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:51.527 19:50:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:51.527 19:50:32 -- nvmf/common.sh@693 -- # key=7c5cceca7592bae2e8d42b42cf6d9515c9ce2e2470b22a95 00:19:51.527 19:50:32 -- nvmf/common.sh@693 -- # 
digest=0 00:19:51.527 19:50:32 -- nvmf/common.sh@694 -- # python - 00:19:51.527 19:50:32 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.7dl 00:19:51.527 19:50:32 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.7dl 00:19:51.527 19:50:32 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.7dl 00:19:51.527 19:50:32 -- host/auth.sh@83 -- # gen_key sha256 32 00:19:51.527 19:50:32 -- host/auth.sh@53 -- # local digest len file key 00:19:51.527 19:50:32 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:51.527 19:50:32 -- host/auth.sh@54 -- # local -A digests 00:19:51.527 19:50:32 -- host/auth.sh@56 -- # digest=sha256 00:19:51.527 19:50:32 -- host/auth.sh@56 -- # len=32 00:19:51.527 19:50:32 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:51.527 19:50:32 -- host/auth.sh@57 -- # key=d37096ff04f4491958ba61b4cd422aec 00:19:51.527 19:50:32 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:19:51.527 19:50:32 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.ZNa 00:19:51.527 19:50:32 -- host/auth.sh@59 -- # format_dhchap_key d37096ff04f4491958ba61b4cd422aec 1 00:19:51.527 19:50:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 d37096ff04f4491958ba61b4cd422aec 1 00:19:51.527 19:50:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:51.527 19:50:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:51.527 19:50:32 -- nvmf/common.sh@693 -- # key=d37096ff04f4491958ba61b4cd422aec 00:19:51.527 19:50:32 -- nvmf/common.sh@693 -- # digest=1 00:19:51.527 19:50:32 -- nvmf/common.sh@694 -- # python - 00:19:51.527 19:50:33 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.ZNa 00:19:51.527 19:50:33 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.ZNa 00:19:51.527 19:50:33 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.ZNa 00:19:51.527 19:50:33 -- host/auth.sh@84 -- # gen_key sha384 48 00:19:51.527 19:50:33 -- host/auth.sh@53 -- # local digest len file key 00:19:51.527 19:50:33 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:51.527 19:50:33 -- host/auth.sh@54 -- # local -A digests 00:19:51.527 19:50:33 -- host/auth.sh@56 -- # digest=sha384 00:19:51.527 19:50:33 -- host/auth.sh@56 -- # len=48 00:19:51.527 19:50:33 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:51.527 19:50:33 -- host/auth.sh@57 -- # key=43fe77f22e5d2b824b4041012f2d28ddfb78d016871f1a4a 00:19:51.527 19:50:33 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:19:51.527 19:50:33 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.hOC 00:19:51.527 19:50:33 -- host/auth.sh@59 -- # format_dhchap_key 43fe77f22e5d2b824b4041012f2d28ddfb78d016871f1a4a 2 00:19:51.527 19:50:33 -- nvmf/common.sh@708 -- # format_key DHHC-1 43fe77f22e5d2b824b4041012f2d28ddfb78d016871f1a4a 2 00:19:51.527 19:50:33 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:51.527 19:50:33 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:51.527 19:50:33 -- nvmf/common.sh@693 -- # key=43fe77f22e5d2b824b4041012f2d28ddfb78d016871f1a4a 00:19:51.527 19:50:33 -- nvmf/common.sh@693 -- # digest=2 00:19:51.527 19:50:33 -- nvmf/common.sh@694 -- # python - 00:19:51.784 19:50:33 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.hOC 00:19:51.784 19:50:33 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.hOC 00:19:51.784 19:50:33 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.hOC 00:19:51.784 19:50:33 -- host/auth.sh@85 -- # gen_key sha512 64 00:19:51.784 19:50:33 -- host/auth.sh@53 -- # local digest len file key 00:19:51.784 19:50:33 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:51.784 19:50:33 -- host/auth.sh@54 -- # local -A digests 00:19:51.784 19:50:33 -- host/auth.sh@56 -- # digest=sha512 00:19:51.784 19:50:33 -- host/auth.sh@56 -- # len=64 00:19:51.784 19:50:33 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:51.784 19:50:33 -- host/auth.sh@57 -- # key=ab43d015ce8e36eb7617a18b06ff3b7a6c3cd08df6051b944e5bd94ab1f3dbf7 00:19:51.784 19:50:33 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:19:51.784 19:50:33 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.5LO 00:19:51.784 19:50:33 -- host/auth.sh@59 -- # format_dhchap_key ab43d015ce8e36eb7617a18b06ff3b7a6c3cd08df6051b944e5bd94ab1f3dbf7 3 00:19:51.784 19:50:33 -- nvmf/common.sh@708 -- # format_key DHHC-1 ab43d015ce8e36eb7617a18b06ff3b7a6c3cd08df6051b944e5bd94ab1f3dbf7 3 00:19:51.784 19:50:33 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:51.784 19:50:33 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:19:51.784 19:50:33 -- nvmf/common.sh@693 -- # key=ab43d015ce8e36eb7617a18b06ff3b7a6c3cd08df6051b944e5bd94ab1f3dbf7 00:19:51.784 19:50:33 -- nvmf/common.sh@693 -- # digest=3 00:19:51.784 19:50:33 -- nvmf/common.sh@694 -- # python - 00:19:51.784 19:50:33 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.5LO 00:19:51.784 19:50:33 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.5LO 00:19:51.784 19:50:33 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.5LO 00:19:51.784 19:50:33 -- host/auth.sh@87 -- # waitforlisten 1759577 00:19:51.784 19:50:33 -- common/autotest_common.sh@817 -- # '[' -z 1759577 ']' 00:19:51.784 19:50:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.784 19:50:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:51.784 19:50:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
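
Aside: each gen_key call above draws len/2 random bytes from /dev/urandom as a hex string and wraps it in the DHHC-1 secret representation via the inlined "python -" step. A standalone sketch of my reading of that step (assumptions: the base64 payload is the ASCII key text followed by its CRC-32 in little-endian, and the digest index follows the digests map above, 0 = null, 1/2/3 = sha256/384/512):

    key=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex chars, as in gen_key null 32
    file=$(mktemp -t spdk.key-null.XXX)
    python3 -c 'import base64,struct,sys,zlib; key=sys.argv[1].encode(); d=int(sys.argv[2]); crc=struct.pack("<I", zlib.crc32(key)&0xffffffff); print(f"DHHC-1:{d:02x}:{base64.b64encode(key+crc).decode()}:")' "$key" 0 > "$file"
    chmod 0600 "$file"

Decoding the first secret above supports this layout: the base64 body "MTA1NTJkNGYy..." is the ASCII of 10552d4f2... followed by four trailing CRC bytes.
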
00:19:51.784 19:50:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:51.784 19:50:33 -- common/autotest_common.sh@10 -- # set +x 00:19:52.052 19:50:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:52.052 19:50:33 -- common/autotest_common.sh@850 -- # return 0 00:19:52.052 19:50:33 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:52.052 19:50:33 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Cpy 00:19:52.052 19:50:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.052 19:50:33 -- common/autotest_common.sh@10 -- # set +x 00:19:52.052 19:50:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.052 19:50:33 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:52.052 19:50:33 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.7dl 00:19:52.052 19:50:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.052 19:50:33 -- common/autotest_common.sh@10 -- # set +x 00:19:52.052 19:50:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.052 19:50:33 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:52.052 19:50:33 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ZNa 00:19:52.052 19:50:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.052 19:50:33 -- common/autotest_common.sh@10 -- # set +x 00:19:52.052 19:50:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.052 19:50:33 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:52.052 19:50:33 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.hOC 00:19:52.052 19:50:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.052 19:50:33 -- common/autotest_common.sh@10 -- # set +x 00:19:52.052 19:50:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.052 19:50:33 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:19:52.052 19:50:33 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.5LO 00:19:52.052 19:50:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:52.052 19:50:33 -- common/autotest_common.sh@10 -- # set +x 00:19:52.052 19:50:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:52.052 19:50:33 -- host/auth.sh@92 -- # nvmet_auth_init 00:19:52.052 19:50:33 -- host/auth.sh@35 -- # get_main_ns_ip 00:19:52.052 19:50:33 -- nvmf/common.sh@717 -- # local ip 00:19:52.052 19:50:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:52.052 19:50:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:52.052 19:50:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.052 19:50:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.052 19:50:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:52.052 19:50:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.052 19:50:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:52.052 19:50:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:52.052 19:50:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:52.052 19:50:33 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:52.052 19:50:33 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:52.052 19:50:33 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:19:52.052 19:50:33 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:52.052 19:50:33 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:52.052 19:50:33 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:52.052 19:50:33 -- nvmf/common.sh@628 -- # local block nvme 00:19:52.052 19:50:33 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:19:52.052 19:50:33 -- nvmf/common.sh@631 -- # modprobe nvmet 00:19:52.052 19:50:33 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:52.052 19:50:33 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:19:52.987 Waiting for block devices as requested 00:19:53.244 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:19:53.244 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:53.503 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:53.503 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:53.503 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:53.763 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:53.763 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:53.763 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:53.763 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:54.023 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:19:54.023 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:19:54.024 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:19:54.283 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:19:54.283 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:19:54.283 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:19:54.283 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:19:54.541 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:19:54.799 19:50:36 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:19:54.799 19:50:36 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:54.799 19:50:36 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:19:54.799 19:50:36 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:19:54.799 19:50:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:54.799 19:50:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:54.799 19:50:36 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:19:54.799 19:50:36 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:54.799 19:50:36 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:54.799 No valid GPT data, bailing 00:19:54.799 19:50:36 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:54.799 19:50:36 -- scripts/common.sh@391 -- # pt= 00:19:54.799 19:50:36 -- scripts/common.sh@392 -- # return 1 00:19:54.799 19:50:36 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:19:54.799 19:50:36 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:19:54.799 19:50:36 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:54.799 19:50:36 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:54.799 19:50:36 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:54.799 19:50:36 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:54.799 19:50:36 -- nvmf/common.sh@656 -- # echo 1 00:19:54.799 19:50:36 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:19:54.799 19:50:36 -- nvmf/common.sh@658 -- # echo 1 00:19:54.799 19:50:36 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:19:54.799 19:50:36 -- nvmf/common.sh@661 -- # echo tcp 00:19:54.799 19:50:36 -- 
nvmf/common.sh@662 -- # echo 4420 00:19:54.799 19:50:36 -- nvmf/common.sh@663 -- # echo ipv4 00:19:54.799 19:50:36 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:54.799 19:50:36 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:19:55.058 00:19:55.058 Discovery Log Number of Records 2, Generation counter 2 00:19:55.058 =====Discovery Log Entry 0====== 00:19:55.058 trtype: tcp 00:19:55.058 adrfam: ipv4 00:19:55.058 subtype: current discovery subsystem 00:19:55.058 treq: not specified, sq flow control disable supported 00:19:55.058 portid: 1 00:19:55.058 trsvcid: 4420 00:19:55.058 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:55.058 traddr: 10.0.0.1 00:19:55.058 eflags: none 00:19:55.058 sectype: none 00:19:55.058 =====Discovery Log Entry 1====== 00:19:55.058 trtype: tcp 00:19:55.058 adrfam: ipv4 00:19:55.058 subtype: nvme subsystem 00:19:55.058 treq: not specified, sq flow control disable supported 00:19:55.058 portid: 1 00:19:55.058 trsvcid: 4420 00:19:55.058 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:55.058 traddr: 10.0.0.1 00:19:55.058 eflags: none 00:19:55.058 sectype: none 00:19:55.058 19:50:36 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:55.058 19:50:36 -- host/auth.sh@37 -- # echo 0 00:19:55.058 19:50:36 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:55.058 19:50:36 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:55.058 19:50:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:55.058 19:50:36 -- host/auth.sh@44 -- # digest=sha256 00:19:55.058 19:50:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:55.058 19:50:36 -- host/auth.sh@44 -- # keyid=1 00:19:55.058 19:50:36 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:19:55.058 19:50:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:55.058 19:50:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:55.058 19:50:36 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:19:55.058 19:50:36 -- host/auth.sh@100 -- # IFS=, 00:19:55.058 19:50:36 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:19:55.058 19:50:36 -- host/auth.sh@100 -- # IFS=, 00:19:55.058 19:50:36 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:55.058 19:50:36 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:55.058 19:50:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:55.058 19:50:36 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:19:55.058 19:50:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:55.058 19:50:36 -- host/auth.sh@68 -- # keyid=1 00:19:55.058 19:50:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:55.058 19:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.058 19:50:36 -- common/autotest_common.sh@10 -- # set +x 00:19:55.058 19:50:36 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.058 19:50:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:55.058 19:50:36 -- nvmf/common.sh@717 -- # local ip 00:19:55.058 19:50:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:55.058 19:50:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:55.059 19:50:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.059 19:50:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.059 19:50:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:55.059 19:50:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.059 19:50:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:55.059 19:50:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:55.059 19:50:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:55.059 19:50:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:55.059 19:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.059 19:50:36 -- common/autotest_common.sh@10 -- # set +x 00:19:55.059 nvme0n1 00:19:55.059 19:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.059 19:50:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.059 19:50:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:55.059 19:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.059 19:50:36 -- common/autotest_common.sh@10 -- # set +x 00:19:55.059 19:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.059 19:50:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.059 19:50:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.059 19:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.059 19:50:36 -- common/autotest_common.sh@10 -- # set +x 00:19:55.059 19:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.059 19:50:36 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:55.059 19:50:36 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.059 19:50:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:55.059 19:50:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:55.059 19:50:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:55.059 19:50:36 -- host/auth.sh@44 -- # digest=sha256 00:19:55.059 19:50:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:55.059 19:50:36 -- host/auth.sh@44 -- # keyid=0 00:19:55.059 19:50:36 -- host/auth.sh@45 -- # key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:19:55.059 19:50:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:55.059 19:50:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:55.059 19:50:36 -- host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:19:55.059 19:50:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:19:55.059 19:50:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:55.059 19:50:36 -- host/auth.sh@68 -- # digest=sha256 00:19:55.059 19:50:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:55.059 19:50:36 -- host/auth.sh@68 -- # keyid=0 00:19:55.059 19:50:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.059 19:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.059 19:50:36 -- common/autotest_common.sh@10 -- # set +x 00:19:55.319 19:50:36 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.319 19:50:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:55.319 19:50:36 -- nvmf/common.sh@717 -- # local ip 00:19:55.319 19:50:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:55.319 19:50:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:55.319 19:50:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.319 19:50:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.319 19:50:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:55.319 19:50:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.319 19:50:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:55.319 19:50:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:55.319 19:50:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:55.319 19:50:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:55.319 19:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.319 19:50:36 -- common/autotest_common.sh@10 -- # set +x 00:19:55.319 nvme0n1 00:19:55.319 19:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.319 19:50:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.319 19:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.319 19:50:36 -- common/autotest_common.sh@10 -- # set +x 00:19:55.319 19:50:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:55.319 19:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.319 19:50:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.319 19:50:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.319 19:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.319 19:50:36 -- common/autotest_common.sh@10 -- # set +x 00:19:55.319 19:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.319 19:50:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:55.319 19:50:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:55.319 19:50:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:55.319 19:50:36 -- host/auth.sh@44 -- # digest=sha256 00:19:55.319 19:50:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:55.319 19:50:36 -- host/auth.sh@44 -- # keyid=1 00:19:55.319 19:50:36 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:19:55.319 19:50:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:55.319 19:50:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:55.319 19:50:36 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:19:55.319 19:50:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:19:55.319 19:50:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:55.319 19:50:36 -- host/auth.sh@68 -- # digest=sha256 00:19:55.319 19:50:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:55.319 19:50:36 -- host/auth.sh@68 -- # keyid=1 00:19:55.319 19:50:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.319 19:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.319 19:50:36 -- common/autotest_common.sh@10 -- # set +x 00:19:55.319 19:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.319 19:50:36 -- host/auth.sh@70 -- # get_main_ns_ip 
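
Aside: the configure_kernel_target sequence earlier (just before the discovery log output) is a plain configfs walk that exports /dev/nvme0n1 from the kernel soft target. xtrace does not print redirect targets, so the attribute paths below are my assumption based on the standard nvmet configfs layout; the mkdir, ln -s and echo values themselves are the ones in the log:

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"     # target IP inside the netns
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"
    # auth.sh then pins access to the one test host instead of allow-any:
    mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > "$sub/attr_allow_any_host"
    ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 "$sub/allowed_hosts/"
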
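Aside: the five keys are registered once with keyring_file_add_key (key0..key4 above); after that, every connect_authenticate pass is the same short RPC cycle against the target app, taken directly from the rpc_cmd traces in this log (rpc_cmd is the common.sh wrapper around scripts/rpc.py):

    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
    rpc.py bdev_nvme_get_controllers      # must report "nvme0", i.e. auth succeeded
    rpc.py bdev_nvme_detach_controller nvme0
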
00:19:55.319 19:50:36 -- nvmf/common.sh@717 -- # local ip 00:19:55.319 19:50:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:55.319 19:50:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:55.319 19:50:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.319 19:50:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.319 19:50:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:55.319 19:50:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.319 19:50:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:55.319 19:50:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:55.319 19:50:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:55.319 19:50:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:55.319 19:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.319 19:50:36 -- common/autotest_common.sh@10 -- # set +x 00:19:55.579 nvme0n1 00:19:55.579 19:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.579 19:50:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.579 19:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.579 19:50:36 -- common/autotest_common.sh@10 -- # set +x 00:19:55.579 19:50:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:55.579 19:50:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.579 19:50:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.579 19:50:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.579 19:50:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.579 19:50:36 -- common/autotest_common.sh@10 -- # set +x 00:19:55.579 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.579 19:50:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:55.579 19:50:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:55.579 19:50:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:55.579 19:50:37 -- host/auth.sh@44 -- # digest=sha256 00:19:55.579 19:50:37 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:55.579 19:50:37 -- host/auth.sh@44 -- # keyid=2 00:19:55.579 19:50:37 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:19:55.579 19:50:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:55.579 19:50:37 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:55.579 19:50:37 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:19:55.579 19:50:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:19:55.579 19:50:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:55.579 19:50:37 -- host/auth.sh@68 -- # digest=sha256 00:19:55.579 19:50:37 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:55.579 19:50:37 -- host/auth.sh@68 -- # keyid=2 00:19:55.579 19:50:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.579 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.579 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.579 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.579 19:50:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:55.579 19:50:37 -- nvmf/common.sh@717 -- # local ip 00:19:55.579 19:50:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:55.579 19:50:37 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:19:55.579 19:50:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.579 19:50:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.579 19:50:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:55.579 19:50:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.579 19:50:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:55.579 19:50:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:55.579 19:50:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:55.579 19:50:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:55.579 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.579 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.839 nvme0n1 00:19:55.839 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.839 19:50:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.839 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.839 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.839 19:50:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:55.839 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.839 19:50:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.839 19:50:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.839 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.839 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.839 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.839 19:50:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:55.839 19:50:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:55.839 19:50:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:55.839 19:50:37 -- host/auth.sh@44 -- # digest=sha256 00:19:55.839 19:50:37 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:55.839 19:50:37 -- host/auth.sh@44 -- # keyid=3 00:19:55.839 19:50:37 -- host/auth.sh@45 -- # key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:19:55.839 19:50:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:55.839 19:50:37 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:55.839 19:50:37 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:19:55.839 19:50:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:19:55.839 19:50:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:55.839 19:50:37 -- host/auth.sh@68 -- # digest=sha256 00:19:55.839 19:50:37 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:55.839 19:50:37 -- host/auth.sh@68 -- # keyid=3 00:19:55.839 19:50:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.839 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.839 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.839 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.839 19:50:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:55.839 19:50:37 -- nvmf/common.sh@717 -- # local ip 00:19:55.839 19:50:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:55.839 19:50:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:55.839 19:50:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
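
Aside: the matching target-side half of each pass is nvmet_auth_set_key (host/auth.sh@42-49 above), which echoes the digest, DH group and the corresponding DHHC-1 secret into the kernel host entry created earlier. The echoed values are in the log; the redirect targets are hidden by xtrace, so the attribute names below are an assumption based on the usual nvmet host attributes:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest for this pass
    echo ffdhe2048 > "$host/dhchap_dhgroup"        # DH group for this pass
    echo 'DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd:' > "$host/dhchap_key"
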
00:19:55.839 19:50:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.839 19:50:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:55.839 19:50:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.839 19:50:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:55.839 19:50:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:55.839 19:50:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:55.839 19:50:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:55.839 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.839 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:56.099 nvme0n1 00:19:56.099 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.099 19:50:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.099 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.099 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:56.099 19:50:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:56.099 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.099 19:50:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.099 19:50:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.099 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.099 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:56.099 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.099 19:50:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:56.099 19:50:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:56.099 19:50:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:56.099 19:50:37 -- host/auth.sh@44 -- # digest=sha256 00:19:56.099 19:50:37 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:56.099 19:50:37 -- host/auth.sh@44 -- # keyid=4 00:19:56.099 19:50:37 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:19:56.099 19:50:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:56.099 19:50:37 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:56.099 19:50:37 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:19:56.099 19:50:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:19:56.099 19:50:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:56.099 19:50:37 -- host/auth.sh@68 -- # digest=sha256 00:19:56.099 19:50:37 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:56.099 19:50:37 -- host/auth.sh@68 -- # keyid=4 00:19:56.099 19:50:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:56.099 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.099 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:56.099 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.099 19:50:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:56.099 19:50:37 -- nvmf/common.sh@717 -- # local ip 00:19:56.099 19:50:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:56.099 19:50:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:56.099 19:50:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.099 19:50:37 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.099 19:50:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:56.099 19:50:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.099 19:50:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:56.099 19:50:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:56.099 19:50:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:56.099 19:50:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:56.099 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.099 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:56.360 nvme0n1 00:19:56.360 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.360 19:50:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.360 19:50:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:56.360 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.360 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:56.360 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.360 19:50:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.360 19:50:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.360 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.360 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:56.360 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.360 19:50:37 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.360 19:50:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:56.360 19:50:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:56.360 19:50:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:56.360 19:50:37 -- host/auth.sh@44 -- # digest=sha256 00:19:56.360 19:50:37 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:56.360 19:50:37 -- host/auth.sh@44 -- # keyid=0 00:19:56.360 19:50:37 -- host/auth.sh@45 -- # key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:19:56.360 19:50:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:56.360 19:50:37 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:56.360 19:50:37 -- host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:19:56.360 19:50:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:19:56.360 19:50:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:56.360 19:50:37 -- host/auth.sh@68 -- # digest=sha256 00:19:56.360 19:50:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:56.360 19:50:37 -- host/auth.sh@68 -- # keyid=0 00:19:56.360 19:50:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:56.360 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.360 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:56.360 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.360 19:50:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:56.360 19:50:37 -- nvmf/common.sh@717 -- # local ip 00:19:56.360 19:50:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:56.360 19:50:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:56.360 19:50:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.360 19:50:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.360 19:50:37 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:19:56.360 19:50:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.360 19:50:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:56.360 19:50:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:56.360 19:50:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:56.360 19:50:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:56.360 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.360 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:56.619 nvme0n1 00:19:56.619 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.619 19:50:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.619 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.619 19:50:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:56.619 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:56.619 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.619 19:50:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.619 19:50:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.619 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.619 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:56.619 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.619 19:50:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:56.619 19:50:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:56.619 19:50:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:56.619 19:50:37 -- host/auth.sh@44 -- # digest=sha256 00:19:56.619 19:50:37 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:56.619 19:50:37 -- host/auth.sh@44 -- # keyid=1 00:19:56.619 19:50:37 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:19:56.619 19:50:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:56.619 19:50:37 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:56.619 19:50:37 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:19:56.619 19:50:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:19:56.619 19:50:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:56.619 19:50:37 -- host/auth.sh@68 -- # digest=sha256 00:19:56.619 19:50:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:56.619 19:50:37 -- host/auth.sh@68 -- # keyid=1 00:19:56.619 19:50:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:56.619 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.619 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:56.619 19:50:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.619 19:50:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:56.619 19:50:37 -- nvmf/common.sh@717 -- # local ip 00:19:56.619 19:50:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:56.619 19:50:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:56.619 19:50:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.619 19:50:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.619 19:50:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:56.619 19:50:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.619 19:50:37 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:56.619 19:50:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:56.619 19:50:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:56.619 19:50:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:56.619 19:50:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.619 19:50:37 -- common/autotest_common.sh@10 -- # set +x 00:19:56.879 nvme0n1 00:19:56.879 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.879 19:50:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.879 19:50:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:56.879 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.879 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:56.879 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.879 19:50:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.879 19:50:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.879 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.879 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:56.879 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.879 19:50:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:56.879 19:50:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:56.879 19:50:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:56.879 19:50:38 -- host/auth.sh@44 -- # digest=sha256 00:19:56.879 19:50:38 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:56.879 19:50:38 -- host/auth.sh@44 -- # keyid=2 00:19:56.879 19:50:38 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:19:56.879 19:50:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:56.879 19:50:38 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:56.879 19:50:38 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:19:56.879 19:50:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:19:56.879 19:50:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:56.879 19:50:38 -- host/auth.sh@68 -- # digest=sha256 00:19:56.879 19:50:38 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:56.879 19:50:38 -- host/auth.sh@68 -- # keyid=2 00:19:56.879 19:50:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:56.879 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.879 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:56.879 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.879 19:50:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:56.879 19:50:38 -- nvmf/common.sh@717 -- # local ip 00:19:56.879 19:50:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:56.879 19:50:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:56.879 19:50:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.879 19:50:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.879 19:50:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:56.879 19:50:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.879 19:50:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:56.879 19:50:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:56.879 19:50:38 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:19:56.879 19:50:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:56.879 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.879 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:57.139 nvme0n1 00:19:57.139 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.139 19:50:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.139 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.139 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:57.139 19:50:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:57.139 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.139 19:50:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.139 19:50:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.139 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.139 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:57.139 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.139 19:50:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:57.139 19:50:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:57.139 19:50:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:57.139 19:50:38 -- host/auth.sh@44 -- # digest=sha256 00:19:57.139 19:50:38 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:57.139 19:50:38 -- host/auth.sh@44 -- # keyid=3 00:19:57.139 19:50:38 -- host/auth.sh@45 -- # key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:19:57.139 19:50:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:57.139 19:50:38 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:57.139 19:50:38 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:19:57.139 19:50:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:19:57.139 19:50:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:57.139 19:50:38 -- host/auth.sh@68 -- # digest=sha256 00:19:57.139 19:50:38 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:57.139 19:50:38 -- host/auth.sh@68 -- # keyid=3 00:19:57.139 19:50:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.139 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.139 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:57.139 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.139 19:50:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:57.139 19:50:38 -- nvmf/common.sh@717 -- # local ip 00:19:57.139 19:50:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:57.139 19:50:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:57.139 19:50:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.139 19:50:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.139 19:50:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:57.139 19:50:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.139 19:50:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:57.139 19:50:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:57.139 19:50:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:57.139 19:50:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:57.139 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.139 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:57.398 nvme0n1 00:19:57.398 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.398 19:50:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.398 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.398 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:57.398 19:50:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:57.398 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.398 19:50:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.398 19:50:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.398 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.398 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:57.398 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.398 19:50:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:57.398 19:50:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:57.398 19:50:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:57.398 19:50:38 -- host/auth.sh@44 -- # digest=sha256 00:19:57.398 19:50:38 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:57.398 19:50:38 -- host/auth.sh@44 -- # keyid=4 00:19:57.398 19:50:38 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:19:57.398 19:50:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:57.398 19:50:38 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:57.398 19:50:38 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:19:57.398 19:50:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:19:57.398 19:50:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:57.398 19:50:38 -- host/auth.sh@68 -- # digest=sha256 00:19:57.398 19:50:38 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:57.398 19:50:38 -- host/auth.sh@68 -- # keyid=4 00:19:57.398 19:50:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.398 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.398 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:57.398 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.398 19:50:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:57.398 19:50:38 -- nvmf/common.sh@717 -- # local ip 00:19:57.398 19:50:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:57.398 19:50:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:57.398 19:50:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.398 19:50:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.398 19:50:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:57.398 19:50:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.398 19:50:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:57.398 19:50:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:57.398 19:50:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:57.398 19:50:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:19:57.398 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.398 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 nvme0n1 00:19:57.658 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.658 19:50:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.658 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.658 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 19:50:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:57.658 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.658 19:50:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.658 19:50:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.658 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.658 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.658 19:50:38 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.658 19:50:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:57.658 19:50:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:57.658 19:50:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:57.658 19:50:38 -- host/auth.sh@44 -- # digest=sha256 00:19:57.658 19:50:38 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:57.658 19:50:38 -- host/auth.sh@44 -- # keyid=0 00:19:57.658 19:50:38 -- host/auth.sh@45 -- # key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:19:57.658 19:50:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:57.658 19:50:38 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:57.658 19:50:38 -- host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:19:57.658 19:50:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:19:57.658 19:50:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:57.658 19:50:38 -- host/auth.sh@68 -- # digest=sha256 00:19:57.658 19:50:38 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:57.658 19:50:38 -- host/auth.sh@68 -- # keyid=0 00:19:57.658 19:50:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.658 19:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.658 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 19:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.658 19:50:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:57.658 19:50:38 -- nvmf/common.sh@717 -- # local ip 00:19:57.658 19:50:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:57.658 19:50:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:57.658 19:50:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.658 19:50:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.658 19:50:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:57.658 19:50:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.658 19:50:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:57.658 19:50:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:57.658 19:50:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:57.658 19:50:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:57.658 19:50:38 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:57.658 19:50:38 -- common/autotest_common.sh@10 -- # set +x 00:19:57.917 nvme0n1 00:19:57.917 19:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.917 19:50:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.917 19:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.917 19:50:39 -- common/autotest_common.sh@10 -- # set +x 00:19:57.917 19:50:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:57.917 19:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.917 19:50:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.917 19:50:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.917 19:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.917 19:50:39 -- common/autotest_common.sh@10 -- # set +x 00:19:57.917 19:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.917 19:50:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:57.917 19:50:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:57.917 19:50:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:57.917 19:50:39 -- host/auth.sh@44 -- # digest=sha256 00:19:57.917 19:50:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:57.917 19:50:39 -- host/auth.sh@44 -- # keyid=1 00:19:57.917 19:50:39 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:19:57.917 19:50:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:57.917 19:50:39 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:57.917 19:50:39 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:19:57.917 19:50:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:19:57.917 19:50:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:57.917 19:50:39 -- host/auth.sh@68 -- # digest=sha256 00:19:57.917 19:50:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:57.917 19:50:39 -- host/auth.sh@68 -- # keyid=1 00:19:57.917 19:50:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:57.917 19:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.917 19:50:39 -- common/autotest_common.sh@10 -- # set +x 00:19:57.917 19:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.917 19:50:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:57.917 19:50:39 -- nvmf/common.sh@717 -- # local ip 00:19:57.917 19:50:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:57.917 19:50:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:57.917 19:50:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.917 19:50:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.917 19:50:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:57.917 19:50:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.917 19:50:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:57.917 19:50:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:57.917 19:50:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:57.917 19:50:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:57.917 19:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.917 19:50:39 -- common/autotest_common.sh@10 -- # set +x 00:19:58.175 nvme0n1 00:19:58.175 
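The iterations above are easiest to follow as the RPC cycle they trace. Below is a condensed, annotated rendering of one pass, using only commands and flags that appear verbatim in this log (rpc_cmd is the suite's JSON-RPC wrapper); it is a reading aid, not an excerpt from host/auth.sh.

# One connect_authenticate pass as traced above; digest, dhgroup and keyid
# vary per iteration.
digest=sha256 dhgroup=ffdhe4096 keyid=1

# Restrict the initiator to the digest/DH-group pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the matching DH-HMAC-CHAP secret; this is where the in-band
# authentication against the target at 10.0.0.1:4420 happens. On success the
# RPC prints the created bdev (the bare "nvme0n1" lines in the log).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"

# Confirm the controller exists, then detach so the next pass starts clean.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0
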
19:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.175 19:50:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.175 19:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.175 19:50:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:58.175 19:50:39 -- common/autotest_common.sh@10 -- # set +x 00:19:58.175 19:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.175 19:50:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.175 19:50:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.175 19:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.175 19:50:39 -- common/autotest_common.sh@10 -- # set +x 00:19:58.175 19:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.175 19:50:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:58.175 19:50:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:58.175 19:50:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:58.175 19:50:39 -- host/auth.sh@44 -- # digest=sha256 00:19:58.175 19:50:39 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:58.175 19:50:39 -- host/auth.sh@44 -- # keyid=2 00:19:58.175 19:50:39 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:19:58.175 19:50:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:58.435 19:50:39 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:58.435 19:50:39 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:19:58.435 19:50:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:19:58.435 19:50:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:58.435 19:50:39 -- host/auth.sh@68 -- # digest=sha256 00:19:58.435 19:50:39 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:58.435 19:50:39 -- host/auth.sh@68 -- # keyid=2 00:19:58.435 19:50:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.435 19:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.435 19:50:39 -- common/autotest_common.sh@10 -- # set +x 00:19:58.435 19:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.435 19:50:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:58.435 19:50:39 -- nvmf/common.sh@717 -- # local ip 00:19:58.435 19:50:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:58.435 19:50:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:58.435 19:50:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.435 19:50:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.435 19:50:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:58.435 19:50:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.435 19:50:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:58.435 19:50:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:58.435 19:50:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:58.435 19:50:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:58.435 19:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.435 19:50:39 -- common/autotest_common.sh@10 -- # set +x 00:19:58.695 nvme0n1 00:19:58.695 19:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.695 19:50:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.695 19:50:39 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:19:58.695 19:50:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.695 19:50:39 -- common/autotest_common.sh@10 -- # set +x 00:19:58.695 19:50:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.695 19:50:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.695 19:50:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.695 19:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.695 19:50:40 -- common/autotest_common.sh@10 -- # set +x 00:19:58.695 19:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.695 19:50:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:58.695 19:50:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:58.695 19:50:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:58.695 19:50:40 -- host/auth.sh@44 -- # digest=sha256 00:19:58.695 19:50:40 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:58.695 19:50:40 -- host/auth.sh@44 -- # keyid=3 00:19:58.695 19:50:40 -- host/auth.sh@45 -- # key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:19:58.695 19:50:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:58.695 19:50:40 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:58.695 19:50:40 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:19:58.695 19:50:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:19:58.695 19:50:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:58.695 19:50:40 -- host/auth.sh@68 -- # digest=sha256 00:19:58.695 19:50:40 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:58.695 19:50:40 -- host/auth.sh@68 -- # keyid=3 00:19:58.695 19:50:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.695 19:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.695 19:50:40 -- common/autotest_common.sh@10 -- # set +x 00:19:58.695 19:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.695 19:50:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:58.695 19:50:40 -- nvmf/common.sh@717 -- # local ip 00:19:58.695 19:50:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:58.695 19:50:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:58.695 19:50:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.695 19:50:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.695 19:50:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:58.695 19:50:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.695 19:50:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:58.695 19:50:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:58.695 19:50:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:58.695 19:50:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:58.695 19:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.695 19:50:40 -- common/autotest_common.sh@10 -- # set +x 00:19:58.955 nvme0n1 00:19:58.955 19:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.955 19:50:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.955 19:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.955 19:50:40 -- common/autotest_common.sh@10 -- # set +x 
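A note on the DHHC-1 strings being cycled through here: they use the NVMe-oF secret representation, i.e. a fixed DHHC-1 tag, a two-digit transform field (00 = plain secret; 01/02/03 = secret pre-hashed with SHA-256/384/512), and a base64 field that decodes to the secret followed by a 4-byte CRC-32. That is why the five keys in this run decode to 32-, 48- and 64-byte secrets. A small hypothetical helper (not part of the suite) makes the layout explicit:

# Split a DH-HMAC-CHAP secret string and report its layout. The trailing-CRC
# interpretation follows the NVMe-oF secret representation described above.
parse_dhchap_key() {
    local prefix transform b64
    IFS=: read -r prefix transform b64 _ <<< "$1"
    local bytes
    bytes=$(printf '%s' "$b64" | base64 -d | wc -c)
    printf '%s: transform=%s secret_bytes=%d\n' "$prefix" "$transform" $((bytes - 4))
}

parse_dhchap_key 'DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh:'
# -> DHHC-1: transform=00 secret_bytes=32
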
00:19:58.955 19:50:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:58.955 19:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.955 19:50:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.955 19:50:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.955 19:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.955 19:50:40 -- common/autotest_common.sh@10 -- # set +x 00:19:58.955 19:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.955 19:50:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:58.955 19:50:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:58.955 19:50:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:58.955 19:50:40 -- host/auth.sh@44 -- # digest=sha256 00:19:58.955 19:50:40 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:58.955 19:50:40 -- host/auth.sh@44 -- # keyid=4 00:19:58.955 19:50:40 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:19:58.955 19:50:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:58.955 19:50:40 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:58.955 19:50:40 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:19:58.955 19:50:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:19:58.955 19:50:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:58.955 19:50:40 -- host/auth.sh@68 -- # digest=sha256 00:19:58.955 19:50:40 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:58.955 19:50:40 -- host/auth.sh@68 -- # keyid=4 00:19:58.955 19:50:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:58.955 19:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.955 19:50:40 -- common/autotest_common.sh@10 -- # set +x 00:19:58.955 19:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.955 19:50:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:58.955 19:50:40 -- nvmf/common.sh@717 -- # local ip 00:19:58.955 19:50:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:58.955 19:50:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:58.955 19:50:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.955 19:50:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.955 19:50:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:58.955 19:50:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.955 19:50:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:58.955 19:50:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:58.955 19:50:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:58.955 19:50:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:58.955 19:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.955 19:50:40 -- common/autotest_common.sh@10 -- # set +x 00:19:59.213 nvme0n1 00:19:59.213 19:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.213 19:50:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.213 19:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.213 19:50:40 -- common/autotest_common.sh@10 -- # set +x 00:19:59.213 19:50:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:59.213 
19:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.471 19:50:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.471 19:50:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.471 19:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.471 19:50:40 -- common/autotest_common.sh@10 -- # set +x 00:19:59.471 19:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.471 19:50:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.471 19:50:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:59.471 19:50:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:59.471 19:50:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:59.471 19:50:40 -- host/auth.sh@44 -- # digest=sha256 00:19:59.471 19:50:40 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:59.471 19:50:40 -- host/auth.sh@44 -- # keyid=0 00:19:59.471 19:50:40 -- host/auth.sh@45 -- # key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:19:59.471 19:50:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:59.471 19:50:40 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:59.471 19:50:40 -- host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:19:59.471 19:50:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:19:59.471 19:50:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:59.471 19:50:40 -- host/auth.sh@68 -- # digest=sha256 00:19:59.471 19:50:40 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:59.471 19:50:40 -- host/auth.sh@68 -- # keyid=0 00:19:59.471 19:50:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:59.471 19:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.472 19:50:40 -- common/autotest_common.sh@10 -- # set +x 00:19:59.472 19:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.472 19:50:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:59.472 19:50:40 -- nvmf/common.sh@717 -- # local ip 00:19:59.472 19:50:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:59.472 19:50:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:59.472 19:50:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.472 19:50:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.472 19:50:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:59.472 19:50:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.472 19:50:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:59.472 19:50:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:59.472 19:50:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:59.472 19:50:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:59.472 19:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.472 19:50:40 -- common/autotest_common.sh@10 -- # set +x 00:20:00.038 nvme0n1 00:20:00.038 19:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.038 19:50:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.038 19:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.038 19:50:41 -- common/autotest_common.sh@10 -- # set +x 00:20:00.038 19:50:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:00.038 19:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.038 19:50:41 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.038 19:50:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.038 19:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.038 19:50:41 -- common/autotest_common.sh@10 -- # set +x 00:20:00.038 19:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.038 19:50:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:00.038 19:50:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:00.038 19:50:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:00.038 19:50:41 -- host/auth.sh@44 -- # digest=sha256 00:20:00.038 19:50:41 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:00.038 19:50:41 -- host/auth.sh@44 -- # keyid=1 00:20:00.038 19:50:41 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:00.038 19:50:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:00.038 19:50:41 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:00.038 19:50:41 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:00.038 19:50:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:20:00.038 19:50:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:00.038 19:50:41 -- host/auth.sh@68 -- # digest=sha256 00:20:00.038 19:50:41 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:00.038 19:50:41 -- host/auth.sh@68 -- # keyid=1 00:20:00.038 19:50:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:00.038 19:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.038 19:50:41 -- common/autotest_common.sh@10 -- # set +x 00:20:00.038 19:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.039 19:50:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:00.039 19:50:41 -- nvmf/common.sh@717 -- # local ip 00:20:00.039 19:50:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:00.039 19:50:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:00.039 19:50:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.039 19:50:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.039 19:50:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:00.039 19:50:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.039 19:50:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:00.039 19:50:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:00.039 19:50:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:00.039 19:50:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:00.039 19:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.039 19:50:41 -- common/autotest_common.sh@10 -- # set +x 00:20:00.609 nvme0n1 00:20:00.609 19:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.609 19:50:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.609 19:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.609 19:50:41 -- common/autotest_common.sh@10 -- # set +x 00:20:00.609 19:50:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:00.609 19:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.609 19:50:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.609 19:50:41 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:00.609 19:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.609 19:50:41 -- common/autotest_common.sh@10 -- # set +x 00:20:00.609 19:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.609 19:50:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:00.609 19:50:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:00.609 19:50:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:00.609 19:50:41 -- host/auth.sh@44 -- # digest=sha256 00:20:00.609 19:50:41 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:00.609 19:50:41 -- host/auth.sh@44 -- # keyid=2 00:20:00.609 19:50:41 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:00.609 19:50:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:00.609 19:50:41 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:00.609 19:50:41 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:00.609 19:50:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:20:00.609 19:50:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:00.609 19:50:41 -- host/auth.sh@68 -- # digest=sha256 00:20:00.609 19:50:41 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:00.609 19:50:41 -- host/auth.sh@68 -- # keyid=2 00:20:00.609 19:50:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:00.609 19:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.609 19:50:41 -- common/autotest_common.sh@10 -- # set +x 00:20:00.609 19:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.609 19:50:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:00.609 19:50:41 -- nvmf/common.sh@717 -- # local ip 00:20:00.609 19:50:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:00.609 19:50:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:00.609 19:50:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.610 19:50:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.610 19:50:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:00.610 19:50:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.610 19:50:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:00.610 19:50:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:00.610 19:50:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:00.610 19:50:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:00.610 19:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.610 19:50:41 -- common/autotest_common.sh@10 -- # set +x 00:20:01.177 nvme0n1 00:20:01.177 19:50:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.177 19:50:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.177 19:50:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.177 19:50:42 -- common/autotest_common.sh@10 -- # set +x 00:20:01.177 19:50:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:01.177 19:50:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.177 19:50:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.177 19:50:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.177 19:50:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.177 19:50:42 -- common/autotest_common.sh@10 -- # 
set +x 00:20:01.177 19:50:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.177 19:50:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:01.177 19:50:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:01.177 19:50:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:01.177 19:50:42 -- host/auth.sh@44 -- # digest=sha256 00:20:01.177 19:50:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:01.177 19:50:42 -- host/auth.sh@44 -- # keyid=3 00:20:01.177 19:50:42 -- host/auth.sh@45 -- # key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:01.177 19:50:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:01.177 19:50:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:01.177 19:50:42 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:01.177 19:50:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:20:01.177 19:50:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:01.177 19:50:42 -- host/auth.sh@68 -- # digest=sha256 00:20:01.177 19:50:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:01.177 19:50:42 -- host/auth.sh@68 -- # keyid=3 00:20:01.177 19:50:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.177 19:50:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.177 19:50:42 -- common/autotest_common.sh@10 -- # set +x 00:20:01.177 19:50:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.177 19:50:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:01.177 19:50:42 -- nvmf/common.sh@717 -- # local ip 00:20:01.177 19:50:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:01.177 19:50:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:01.177 19:50:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.177 19:50:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.177 19:50:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:01.177 19:50:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.177 19:50:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:01.177 19:50:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:01.177 19:50:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:01.177 19:50:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:01.177 19:50:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.177 19:50:42 -- common/autotest_common.sh@10 -- # set +x 00:20:01.747 nvme0n1 00:20:01.747 19:50:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.747 19:50:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.747 19:50:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:01.747 19:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.747 19:50:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.747 19:50:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.747 19:50:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.747 19:50:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.747 19:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.747 19:50:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.747 19:50:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.747 19:50:43 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:01.747 19:50:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:01.747 19:50:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:01.747 19:50:43 -- host/auth.sh@44 -- # digest=sha256 00:20:01.747 19:50:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:01.747 19:50:43 -- host/auth.sh@44 -- # keyid=4 00:20:01.747 19:50:43 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:01.747 19:50:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:01.747 19:50:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:01.747 19:50:43 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:01.747 19:50:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:20:01.747 19:50:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:01.747 19:50:43 -- host/auth.sh@68 -- # digest=sha256 00:20:01.747 19:50:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:01.747 19:50:43 -- host/auth.sh@68 -- # keyid=4 00:20:01.747 19:50:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.747 19:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.747 19:50:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.747 19:50:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.747 19:50:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:01.747 19:50:43 -- nvmf/common.sh@717 -- # local ip 00:20:01.747 19:50:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:01.747 19:50:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:01.747 19:50:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.747 19:50:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.747 19:50:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:01.747 19:50:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.747 19:50:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:01.747 19:50:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:01.747 19:50:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:01.747 19:50:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:01.747 19:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.747 19:50:43 -- common/autotest_common.sh@10 -- # set +x 00:20:02.315 nvme0n1 00:20:02.315 19:50:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.315 19:50:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.315 19:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.315 19:50:43 -- common/autotest_common.sh@10 -- # set +x 00:20:02.315 19:50:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:02.315 19:50:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.315 19:50:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.315 19:50:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.315 19:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.315 19:50:43 -- common/autotest_common.sh@10 -- # set +x 00:20:02.315 19:50:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.315 19:50:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.315 19:50:43 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:02.315 19:50:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:02.315 19:50:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:02.315 19:50:43 -- host/auth.sh@44 -- # digest=sha256 00:20:02.315 19:50:43 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:02.315 19:50:43 -- host/auth.sh@44 -- # keyid=0 00:20:02.315 19:50:43 -- host/auth.sh@45 -- # key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:02.315 19:50:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:02.315 19:50:43 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:02.315 19:50:43 -- host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:02.315 19:50:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:20:02.315 19:50:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:02.315 19:50:43 -- host/auth.sh@68 -- # digest=sha256 00:20:02.315 19:50:43 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:02.315 19:50:43 -- host/auth.sh@68 -- # keyid=0 00:20:02.315 19:50:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:02.315 19:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.315 19:50:43 -- common/autotest_common.sh@10 -- # set +x 00:20:02.315 19:50:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:02.315 19:50:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:02.315 19:50:43 -- nvmf/common.sh@717 -- # local ip 00:20:02.315 19:50:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:02.315 19:50:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:02.315 19:50:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.315 19:50:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.315 19:50:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:02.315 19:50:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.315 19:50:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:02.315 19:50:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:02.315 19:50:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:02.315 19:50:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:02.315 19:50:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:02.315 19:50:43 -- common/autotest_common.sh@10 -- # set +x 00:20:03.254 nvme0n1 00:20:03.254 19:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.254 19:50:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.254 19:50:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:03.254 19:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.254 19:50:44 -- common/autotest_common.sh@10 -- # set +x 00:20:03.254 19:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.254 19:50:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.254 19:50:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.254 19:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.254 19:50:44 -- common/autotest_common.sh@10 -- # set +x 00:20:03.513 19:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.513 19:50:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:03.513 19:50:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:03.513 19:50:44 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:03.513 19:50:44 -- host/auth.sh@44 -- # digest=sha256 00:20:03.513 19:50:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:03.513 19:50:44 -- host/auth.sh@44 -- # keyid=1 00:20:03.513 19:50:44 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:03.513 19:50:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:03.513 19:50:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:03.513 19:50:44 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:03.513 19:50:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:20:03.513 19:50:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:03.513 19:50:44 -- host/auth.sh@68 -- # digest=sha256 00:20:03.513 19:50:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:03.513 19:50:44 -- host/auth.sh@68 -- # keyid=1 00:20:03.513 19:50:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.513 19:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.513 19:50:44 -- common/autotest_common.sh@10 -- # set +x 00:20:03.513 19:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.513 19:50:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:03.513 19:50:44 -- nvmf/common.sh@717 -- # local ip 00:20:03.513 19:50:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:03.513 19:50:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:03.513 19:50:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.513 19:50:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.513 19:50:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:03.513 19:50:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.513 19:50:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:03.513 19:50:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:03.513 19:50:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:03.513 19:50:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:03.513 19:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:03.513 19:50:44 -- common/autotest_common.sh@10 -- # set +x 00:20:04.452 nvme0n1 00:20:04.452 19:50:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.453 19:50:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.453 19:50:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.453 19:50:45 -- common/autotest_common.sh@10 -- # set +x 00:20:04.453 19:50:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:04.453 19:50:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.453 19:50:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.453 19:50:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.453 19:50:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.453 19:50:45 -- common/autotest_common.sh@10 -- # set +x 00:20:04.453 19:50:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.453 19:50:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:04.453 19:50:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:04.453 19:50:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:04.453 19:50:45 -- host/auth.sh@44 -- # digest=sha256 
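For orientation in this long stretch of trace: the recurring host/auth.sh@107-111 frames come from a three-level sweep over digests, DH groups and key indices. Its shape is sketched below; the trace shows sha256 and sha384 with the five ffdhe groups and keyids 0-4, while sha512 in the digest list is an assumption, as is the exact variable naming.

# Shape of the sweep driving this trace: provision the target-side key, then
# run the connect/verify/detach cycle, for every digest x dhgroup x key.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
# keys[0..4] hold the five DHHC-1 secrets quoted throughout this log.

for digest in "${digests[@]}"; do                              # host/auth.sh@107
    for dhgroup in "${dhgroups[@]}"; do                        # host/auth.sh@108
        for keyid in "${!keys[@]}"; do                         # host/auth.sh@109
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # host/auth.sh@110
            connect_authenticate "$digest" "$dhgroup" "$keyid" # host/auth.sh@111
        done
    done
done
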
00:20:04.453 19:50:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:04.453 19:50:45 -- host/auth.sh@44 -- # keyid=2 00:20:04.453 19:50:45 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:04.453 19:50:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:04.453 19:50:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:04.453 19:50:45 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:04.453 19:50:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:20:04.453 19:50:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:04.453 19:50:45 -- host/auth.sh@68 -- # digest=sha256 00:20:04.453 19:50:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:04.453 19:50:45 -- host/auth.sh@68 -- # keyid=2 00:20:04.453 19:50:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:04.453 19:50:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.453 19:50:45 -- common/autotest_common.sh@10 -- # set +x 00:20:04.453 19:50:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.453 19:50:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:04.453 19:50:45 -- nvmf/common.sh@717 -- # local ip 00:20:04.453 19:50:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:04.453 19:50:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:04.453 19:50:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.453 19:50:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.453 19:50:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:04.453 19:50:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.453 19:50:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:04.453 19:50:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:04.453 19:50:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:04.453 19:50:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:04.453 19:50:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.453 19:50:45 -- common/autotest_common.sh@10 -- # set +x 00:20:05.390 nvme0n1 00:20:05.390 19:50:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.390 19:50:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.390 19:50:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:05.390 19:50:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.390 19:50:46 -- common/autotest_common.sh@10 -- # set +x 00:20:05.390 19:50:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.391 19:50:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.391 19:50:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.391 19:50:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.391 19:50:46 -- common/autotest_common.sh@10 -- # set +x 00:20:05.391 19:50:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.391 19:50:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:05.391 19:50:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:05.391 19:50:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:05.391 19:50:46 -- host/auth.sh@44 -- # digest=sha256 00:20:05.391 19:50:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:05.391 19:50:46 -- host/auth.sh@44 -- # keyid=3 00:20:05.391 19:50:46 -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:05.391 19:50:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:05.391 19:50:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:05.391 19:50:46 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:05.391 19:50:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:20:05.391 19:50:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:05.391 19:50:46 -- host/auth.sh@68 -- # digest=sha256 00:20:05.391 19:50:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:05.391 19:50:46 -- host/auth.sh@68 -- # keyid=3 00:20:05.391 19:50:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:05.391 19:50:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.391 19:50:46 -- common/autotest_common.sh@10 -- # set +x 00:20:05.391 19:50:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.391 19:50:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:05.391 19:50:46 -- nvmf/common.sh@717 -- # local ip 00:20:05.391 19:50:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:05.391 19:50:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:05.391 19:50:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.391 19:50:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.391 19:50:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:05.391 19:50:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.391 19:50:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:05.391 19:50:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:05.391 19:50:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:05.391 19:50:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:05.391 19:50:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.391 19:50:46 -- common/autotest_common.sh@10 -- # set +x 00:20:06.332 nvme0n1 00:20:06.332 19:50:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.332 19:50:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.333 19:50:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.333 19:50:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.333 19:50:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:06.333 19:50:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.333 19:50:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.333 19:50:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.333 19:50:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.333 19:50:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.333 19:50:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.333 19:50:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:06.333 19:50:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:06.333 19:50:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:06.333 19:50:47 -- host/auth.sh@44 -- # digest=sha256 00:20:06.333 19:50:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:06.333 19:50:47 -- host/auth.sh@44 -- # keyid=4 00:20:06.333 19:50:47 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:06.333 
19:50:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:06.333 19:50:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:06.333 19:50:47 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:06.333 19:50:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:20:06.333 19:50:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:06.333 19:50:47 -- host/auth.sh@68 -- # digest=sha256 00:20:06.333 19:50:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:06.333 19:50:47 -- host/auth.sh@68 -- # keyid=4 00:20:06.333 19:50:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.333 19:50:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.333 19:50:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.333 19:50:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.333 19:50:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:06.333 19:50:47 -- nvmf/common.sh@717 -- # local ip 00:20:06.333 19:50:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:06.333 19:50:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:06.333 19:50:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.333 19:50:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.333 19:50:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:06.333 19:50:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.333 19:50:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:06.333 19:50:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:06.333 19:50:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:06.333 19:50:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:06.333 19:50:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.333 19:50:47 -- common/autotest_common.sh@10 -- # set +x 00:20:07.269 nvme0n1 00:20:07.269 19:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.269 19:50:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.269 19:50:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:07.269 19:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.269 19:50:48 -- common/autotest_common.sh@10 -- # set +x 00:20:07.269 19:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.269 19:50:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.269 19:50:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.269 19:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.269 19:50:48 -- common/autotest_common.sh@10 -- # set +x 00:20:07.269 19:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.269 19:50:48 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:07.269 19:50:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.269 19:50:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:07.269 19:50:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:07.269 19:50:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:07.269 19:50:48 -- host/auth.sh@44 -- # digest=sha384 00:20:07.269 19:50:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:07.269 19:50:48 -- host/auth.sh@44 -- # keyid=0 00:20:07.269 19:50:48 -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:07.269 19:50:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:07.269 19:50:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:07.269 19:50:48 -- host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:07.269 19:50:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:20:07.269 19:50:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:07.269 19:50:48 -- host/auth.sh@68 -- # digest=sha384 00:20:07.269 19:50:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:07.269 19:50:48 -- host/auth.sh@68 -- # keyid=0 00:20:07.269 19:50:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.269 19:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.269 19:50:48 -- common/autotest_common.sh@10 -- # set +x 00:20:07.270 19:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.270 19:50:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:07.270 19:50:48 -- nvmf/common.sh@717 -- # local ip 00:20:07.270 19:50:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:07.270 19:50:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:07.270 19:50:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.270 19:50:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.270 19:50:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:07.270 19:50:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.270 19:50:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:07.270 19:50:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:07.270 19:50:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:07.270 19:50:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:07.270 19:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.270 19:50:48 -- common/autotest_common.sh@10 -- # set +x 00:20:07.528 nvme0n1 00:20:07.528 19:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.528 19:50:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.528 19:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.528 19:50:48 -- common/autotest_common.sh@10 -- # set +x 00:20:07.528 19:50:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:07.528 19:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.528 19:50:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.528 19:50:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.528 19:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.528 19:50:48 -- common/autotest_common.sh@10 -- # set +x 00:20:07.528 19:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.528 19:50:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:07.528 19:50:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:07.528 19:50:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:07.528 19:50:48 -- host/auth.sh@44 -- # digest=sha384 00:20:07.528 19:50:48 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:07.528 19:50:48 -- host/auth.sh@44 -- # keyid=1 00:20:07.528 19:50:48 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:07.528 19:50:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:07.528 
19:50:48 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:07.528 19:50:48 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:07.528 19:50:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:20:07.528 19:50:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:07.528 19:50:48 -- host/auth.sh@68 -- # digest=sha384 00:20:07.528 19:50:48 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:07.528 19:50:48 -- host/auth.sh@68 -- # keyid=1 00:20:07.528 19:50:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.528 19:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.528 19:50:48 -- common/autotest_common.sh@10 -- # set +x 00:20:07.528 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.528 19:50:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:07.528 19:50:49 -- nvmf/common.sh@717 -- # local ip 00:20:07.528 19:50:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:07.528 19:50:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:07.528 19:50:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.528 19:50:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.528 19:50:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:07.528 19:50:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.528 19:50:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:07.528 19:50:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:07.528 19:50:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:07.528 19:50:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:07.528 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.528 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:07.786 nvme0n1 00:20:07.786 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.787 19:50:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.787 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.787 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:07.787 19:50:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:07.787 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.787 19:50:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.787 19:50:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.787 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.787 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:07.787 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.787 19:50:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:07.787 19:50:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:07.787 19:50:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:07.787 19:50:49 -- host/auth.sh@44 -- # digest=sha384 00:20:07.787 19:50:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:07.787 19:50:49 -- host/auth.sh@44 -- # keyid=2 00:20:07.787 19:50:49 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:07.787 19:50:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:07.787 19:50:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:07.787 19:50:49 -- host/auth.sh@49 -- # echo 
DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:07.787 19:50:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:20:07.787 19:50:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:07.787 19:50:49 -- host/auth.sh@68 -- # digest=sha384 00:20:07.787 19:50:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:07.787 19:50:49 -- host/auth.sh@68 -- # keyid=2 00:20:07.787 19:50:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:07.787 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.787 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:07.787 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.787 19:50:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:07.787 19:50:49 -- nvmf/common.sh@717 -- # local ip 00:20:07.787 19:50:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:07.787 19:50:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:07.787 19:50:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.787 19:50:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.787 19:50:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:07.787 19:50:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.787 19:50:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:07.787 19:50:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:07.787 19:50:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:07.787 19:50:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:07.787 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.787 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:08.045 nvme0n1 00:20:08.045 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.045 19:50:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.045 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.045 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:08.045 19:50:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:08.045 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.045 19:50:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.045 19:50:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.045 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.045 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:08.045 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.045 19:50:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:08.045 19:50:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:08.045 19:50:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:08.045 19:50:49 -- host/auth.sh@44 -- # digest=sha384 00:20:08.045 19:50:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:08.045 19:50:49 -- host/auth.sh@44 -- # keyid=3 00:20:08.045 19:50:49 -- host/auth.sh@45 -- # key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:08.045 19:50:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:08.045 19:50:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:08.045 19:50:49 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:08.045 19:50:49 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:20:08.045 19:50:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:08.045 19:50:49 -- host/auth.sh@68 -- # digest=sha384 00:20:08.045 19:50:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:08.045 19:50:49 -- host/auth.sh@68 -- # keyid=3 00:20:08.045 19:50:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:08.045 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.045 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:08.045 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.045 19:50:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:08.045 19:50:49 -- nvmf/common.sh@717 -- # local ip 00:20:08.045 19:50:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:08.045 19:50:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:08.045 19:50:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.045 19:50:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.045 19:50:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:08.045 19:50:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.045 19:50:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:08.045 19:50:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:08.045 19:50:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:08.045 19:50:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:08.045 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.045 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:08.304 nvme0n1 00:20:08.304 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.304 19:50:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.304 19:50:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:08.304 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.304 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:08.304 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.304 19:50:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.304 19:50:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.304 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.304 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:08.304 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.304 19:50:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:08.304 19:50:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:08.304 19:50:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:08.304 19:50:49 -- host/auth.sh@44 -- # digest=sha384 00:20:08.304 19:50:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:08.304 19:50:49 -- host/auth.sh@44 -- # keyid=4 00:20:08.304 19:50:49 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:08.304 19:50:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:08.304 19:50:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:08.304 19:50:49 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:08.304 19:50:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:20:08.304 19:50:49 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:20:08.304 19:50:49 -- host/auth.sh@68 -- # digest=sha384 00:20:08.304 19:50:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:08.304 19:50:49 -- host/auth.sh@68 -- # keyid=4 00:20:08.304 19:50:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:08.304 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.304 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:08.304 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.304 19:50:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:08.304 19:50:49 -- nvmf/common.sh@717 -- # local ip 00:20:08.304 19:50:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:08.304 19:50:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:08.304 19:50:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.304 19:50:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.304 19:50:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:08.304 19:50:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.304 19:50:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:08.304 19:50:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:08.304 19:50:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:08.304 19:50:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:08.304 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.304 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:08.563 nvme0n1 00:20:08.563 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.563 19:50:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.563 19:50:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:08.563 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.563 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:08.563 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.563 19:50:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.563 19:50:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.563 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.563 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:08.563 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.563 19:50:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.563 19:50:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:08.563 19:50:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:08.563 19:50:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:08.563 19:50:49 -- host/auth.sh@44 -- # digest=sha384 00:20:08.563 19:50:49 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:08.563 19:50:49 -- host/auth.sh@44 -- # keyid=0 00:20:08.563 19:50:49 -- host/auth.sh@45 -- # key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:08.563 19:50:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:08.563 19:50:49 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:08.563 19:50:49 -- host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:08.563 19:50:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:20:08.563 19:50:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:08.563 19:50:49 -- host/auth.sh@68 -- # 
digest=sha384 00:20:08.563 19:50:49 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:08.563 19:50:49 -- host/auth.sh@68 -- # keyid=0 00:20:08.563 19:50:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:08.563 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.563 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:08.563 19:50:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.563 19:50:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:08.563 19:50:49 -- nvmf/common.sh@717 -- # local ip 00:20:08.563 19:50:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:08.563 19:50:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:08.563 19:50:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.563 19:50:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.563 19:50:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:08.563 19:50:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.563 19:50:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:08.563 19:50:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:08.563 19:50:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:08.563 19:50:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:08.563 19:50:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.563 19:50:49 -- common/autotest_common.sh@10 -- # set +x 00:20:08.822 nvme0n1 00:20:08.822 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.822 19:50:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.822 19:50:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:08.822 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.822 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:08.822 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.822 19:50:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.822 19:50:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.823 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.823 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:08.823 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.823 19:50:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:08.823 19:50:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:08.823 19:50:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:08.823 19:50:50 -- host/auth.sh@44 -- # digest=sha384 00:20:08.823 19:50:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:08.823 19:50:50 -- host/auth.sh@44 -- # keyid=1 00:20:08.823 19:50:50 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:08.823 19:50:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:08.823 19:50:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:08.823 19:50:50 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:08.823 19:50:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:20:08.823 19:50:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:08.823 19:50:50 -- host/auth.sh@68 -- # digest=sha384 00:20:08.823 19:50:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:08.823 19:50:50 -- host/auth.sh@68 
-- # keyid=1 00:20:08.823 19:50:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:08.823 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.823 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:08.823 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:08.823 19:50:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:08.823 19:50:50 -- nvmf/common.sh@717 -- # local ip 00:20:08.823 19:50:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:08.823 19:50:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:08.823 19:50:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.823 19:50:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.823 19:50:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:08.823 19:50:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.823 19:50:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:08.823 19:50:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:08.823 19:50:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:08.823 19:50:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:08.823 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:08.823 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.082 nvme0n1 00:20:09.082 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.082 19:50:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.082 19:50:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:09.082 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.082 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.082 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.082 19:50:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.082 19:50:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.082 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.082 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.082 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.082 19:50:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:09.082 19:50:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:09.082 19:50:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:09.082 19:50:50 -- host/auth.sh@44 -- # digest=sha384 00:20:09.082 19:50:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:09.082 19:50:50 -- host/auth.sh@44 -- # keyid=2 00:20:09.082 19:50:50 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:09.082 19:50:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:09.082 19:50:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:09.082 19:50:50 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:09.082 19:50:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:20:09.082 19:50:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:09.082 19:50:50 -- host/auth.sh@68 -- # digest=sha384 00:20:09.082 19:50:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:09.082 19:50:50 -- host/auth.sh@68 -- # keyid=2 00:20:09.082 19:50:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.082 19:50:50 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.082 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.082 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.082 19:50:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:09.082 19:50:50 -- nvmf/common.sh@717 -- # local ip 00:20:09.082 19:50:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:09.082 19:50:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:09.082 19:50:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.082 19:50:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.082 19:50:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:09.082 19:50:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.082 19:50:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:09.082 19:50:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:09.082 19:50:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:09.082 19:50:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:09.082 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.082 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.340 nvme0n1 00:20:09.340 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.340 19:50:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.340 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.340 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.340 19:50:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:09.340 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.340 19:50:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.340 19:50:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.340 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.340 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.340 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.340 19:50:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:09.340 19:50:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:09.340 19:50:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:09.340 19:50:50 -- host/auth.sh@44 -- # digest=sha384 00:20:09.340 19:50:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:09.340 19:50:50 -- host/auth.sh@44 -- # keyid=3 00:20:09.340 19:50:50 -- host/auth.sh@45 -- # key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:09.340 19:50:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:09.340 19:50:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:09.340 19:50:50 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:09.340 19:50:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:20:09.340 19:50:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:09.340 19:50:50 -- host/auth.sh@68 -- # digest=sha384 00:20:09.340 19:50:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:09.340 19:50:50 -- host/auth.sh@68 -- # keyid=3 00:20:09.340 19:50:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.340 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.340 19:50:50 -- common/autotest_common.sh@10 -- # set +x 
00:20:09.340 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.340 19:50:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:09.340 19:50:50 -- nvmf/common.sh@717 -- # local ip 00:20:09.340 19:50:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:09.340 19:50:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:09.340 19:50:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.340 19:50:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.340 19:50:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:09.340 19:50:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.340 19:50:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:09.340 19:50:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:09.340 19:50:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:09.340 19:50:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:09.340 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.340 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.599 nvme0n1 00:20:09.599 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.599 19:50:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.599 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.599 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.599 19:50:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:09.599 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.599 19:50:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.599 19:50:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.599 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.599 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.599 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.599 19:50:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:09.599 19:50:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:09.599 19:50:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:09.599 19:50:50 -- host/auth.sh@44 -- # digest=sha384 00:20:09.599 19:50:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:09.599 19:50:50 -- host/auth.sh@44 -- # keyid=4 00:20:09.599 19:50:50 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:09.599 19:50:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:09.599 19:50:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:09.599 19:50:50 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:09.599 19:50:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:20:09.599 19:50:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:09.599 19:50:50 -- host/auth.sh@68 -- # digest=sha384 00:20:09.599 19:50:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:09.599 19:50:50 -- host/auth.sh@68 -- # keyid=4 00:20:09.599 19:50:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.599 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.599 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.599 19:50:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:20:09.599 19:50:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:09.599 19:50:50 -- nvmf/common.sh@717 -- # local ip 00:20:09.599 19:50:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:09.599 19:50:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:09.599 19:50:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.599 19:50:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.599 19:50:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:09.599 19:50:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.599 19:50:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:09.599 19:50:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:09.599 19:50:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:09.599 19:50:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:09.599 19:50:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.599 19:50:50 -- common/autotest_common.sh@10 -- # set +x 00:20:09.857 nvme0n1 00:20:09.857 19:50:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.857 19:50:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.857 19:50:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.857 19:50:51 -- common/autotest_common.sh@10 -- # set +x 00:20:09.857 19:50:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:09.857 19:50:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.857 19:50:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.857 19:50:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.857 19:50:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.857 19:50:51 -- common/autotest_common.sh@10 -- # set +x 00:20:09.857 19:50:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.857 19:50:51 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:09.857 19:50:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:09.857 19:50:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:09.857 19:50:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:09.857 19:50:51 -- host/auth.sh@44 -- # digest=sha384 00:20:09.857 19:50:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:09.857 19:50:51 -- host/auth.sh@44 -- # keyid=0 00:20:09.857 19:50:51 -- host/auth.sh@45 -- # key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:09.857 19:50:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:09.857 19:50:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:09.857 19:50:51 -- host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:09.857 19:50:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:20:09.857 19:50:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:09.857 19:50:51 -- host/auth.sh@68 -- # digest=sha384 00:20:09.857 19:50:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:09.857 19:50:51 -- host/auth.sh@68 -- # keyid=0 00:20:09.857 19:50:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:09.857 19:50:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.857 19:50:51 -- common/autotest_common.sh@10 -- # set +x 00:20:09.857 19:50:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.857 19:50:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:09.857 19:50:51 -- 
nvmf/common.sh@717 -- # local ip 00:20:09.857 19:50:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:09.857 19:50:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:09.857 19:50:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.857 19:50:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.857 19:50:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:09.857 19:50:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.857 19:50:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:09.857 19:50:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:09.857 19:50:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:09.857 19:50:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:09.857 19:50:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.857 19:50:51 -- common/autotest_common.sh@10 -- # set +x 00:20:10.116 nvme0n1 00:20:10.116 19:50:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.116 19:50:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.116 19:50:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.116 19:50:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:10.116 19:50:51 -- common/autotest_common.sh@10 -- # set +x 00:20:10.116 19:50:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.116 19:50:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.116 19:50:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.116 19:50:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.116 19:50:51 -- common/autotest_common.sh@10 -- # set +x 00:20:10.116 19:50:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.116 19:50:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:10.116 19:50:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:10.116 19:50:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:10.116 19:50:51 -- host/auth.sh@44 -- # digest=sha384 00:20:10.116 19:50:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:10.116 19:50:51 -- host/auth.sh@44 -- # keyid=1 00:20:10.116 19:50:51 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:10.116 19:50:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:10.116 19:50:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:10.116 19:50:51 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:10.116 19:50:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:20:10.116 19:50:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:10.116 19:50:51 -- host/auth.sh@68 -- # digest=sha384 00:20:10.116 19:50:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:10.116 19:50:51 -- host/auth.sh@68 -- # keyid=1 00:20:10.116 19:50:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:10.116 19:50:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.116 19:50:51 -- common/autotest_common.sh@10 -- # set +x 00:20:10.116 19:50:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.116 19:50:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:10.116 19:50:51 -- nvmf/common.sh@717 -- # local ip 00:20:10.116 19:50:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:10.116 19:50:51 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:10.116 19:50:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.116 19:50:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.116 19:50:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:10.116 19:50:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.116 19:50:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:10.116 19:50:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:10.116 19:50:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:10.116 19:50:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:10.116 19:50:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.116 19:50:51 -- common/autotest_common.sh@10 -- # set +x 00:20:10.375 nvme0n1 00:20:10.375 19:50:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.375 19:50:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.375 19:50:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.375 19:50:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:10.375 19:50:51 -- common/autotest_common.sh@10 -- # set +x 00:20:10.635 19:50:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.635 19:50:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.635 19:50:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.635 19:50:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.635 19:50:51 -- common/autotest_common.sh@10 -- # set +x 00:20:10.635 19:50:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.635 19:50:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:10.635 19:50:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:10.635 19:50:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:10.635 19:50:51 -- host/auth.sh@44 -- # digest=sha384 00:20:10.635 19:50:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:10.635 19:50:51 -- host/auth.sh@44 -- # keyid=2 00:20:10.635 19:50:51 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:10.635 19:50:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:10.635 19:50:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:10.635 19:50:51 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:10.635 19:50:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:20:10.635 19:50:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:10.635 19:50:51 -- host/auth.sh@68 -- # digest=sha384 00:20:10.635 19:50:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:10.635 19:50:51 -- host/auth.sh@68 -- # keyid=2 00:20:10.635 19:50:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:10.635 19:50:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.635 19:50:51 -- common/autotest_common.sh@10 -- # set +x 00:20:10.635 19:50:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.635 19:50:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:10.635 19:50:51 -- nvmf/common.sh@717 -- # local ip 00:20:10.635 19:50:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:10.635 19:50:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:10.635 19:50:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.635 19:50:51 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.635 19:50:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:10.635 19:50:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.635 19:50:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:10.635 19:50:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:10.635 19:50:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:10.635 19:50:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:10.635 19:50:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.635 19:50:51 -- common/autotest_common.sh@10 -- # set +x 00:20:10.899 nvme0n1 00:20:10.899 19:50:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.899 19:50:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.899 19:50:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:10.899 19:50:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.899 19:50:52 -- common/autotest_common.sh@10 -- # set +x 00:20:10.899 19:50:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.899 19:50:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.899 19:50:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.899 19:50:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.899 19:50:52 -- common/autotest_common.sh@10 -- # set +x 00:20:10.899 19:50:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.899 19:50:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:10.899 19:50:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:10.899 19:50:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:10.899 19:50:52 -- host/auth.sh@44 -- # digest=sha384 00:20:10.899 19:50:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:10.899 19:50:52 -- host/auth.sh@44 -- # keyid=3 00:20:10.899 19:50:52 -- host/auth.sh@45 -- # key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:10.899 19:50:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:10.899 19:50:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:10.899 19:50:52 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:10.899 19:50:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:20:10.899 19:50:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:10.899 19:50:52 -- host/auth.sh@68 -- # digest=sha384 00:20:10.899 19:50:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:10.899 19:50:52 -- host/auth.sh@68 -- # keyid=3 00:20:10.899 19:50:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:10.899 19:50:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.899 19:50:52 -- common/autotest_common.sh@10 -- # set +x 00:20:10.899 19:50:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:10.899 19:50:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:10.899 19:50:52 -- nvmf/common.sh@717 -- # local ip 00:20:10.899 19:50:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:10.899 19:50:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:10.899 19:50:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.899 19:50:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.899 19:50:52 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:20:10.899 19:50:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.899 19:50:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:10.899 19:50:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:10.899 19:50:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:10.899 19:50:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:10.899 19:50:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:10.899 19:50:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.157 nvme0n1 00:20:11.157 19:50:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.157 19:50:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.157 19:50:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.157 19:50:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.157 19:50:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:11.157 19:50:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.157 19:50:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.157 19:50:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.157 19:50:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.157 19:50:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.157 19:50:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.157 19:50:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:11.157 19:50:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:11.157 19:50:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:11.157 19:50:52 -- host/auth.sh@44 -- # digest=sha384 00:20:11.157 19:50:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:11.157 19:50:52 -- host/auth.sh@44 -- # keyid=4 00:20:11.157 19:50:52 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:11.157 19:50:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:11.157 19:50:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:11.157 19:50:52 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:11.157 19:50:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:20:11.157 19:50:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:11.157 19:50:52 -- host/auth.sh@68 -- # digest=sha384 00:20:11.157 19:50:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:11.157 19:50:52 -- host/auth.sh@68 -- # keyid=4 00:20:11.157 19:50:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:11.157 19:50:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.157 19:50:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.157 19:50:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.157 19:50:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:11.157 19:50:52 -- nvmf/common.sh@717 -- # local ip 00:20:11.157 19:50:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:11.157 19:50:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:11.157 19:50:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.157 19:50:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.157 19:50:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:11.157 19:50:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:20:11.157 19:50:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:11.157 19:50:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:11.157 19:50:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:11.157 19:50:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:11.157 19:50:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.157 19:50:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.723 nvme0n1 00:20:11.723 19:50:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.723 19:50:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.723 19:50:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.723 19:50:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.723 19:50:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:11.723 19:50:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.723 19:50:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.723 19:50:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.723 19:50:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.724 19:50:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.724 19:50:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.724 19:50:53 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.724 19:50:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:11.724 19:50:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:11.724 19:50:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:11.724 19:50:53 -- host/auth.sh@44 -- # digest=sha384 00:20:11.724 19:50:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:11.724 19:50:53 -- host/auth.sh@44 -- # keyid=0 00:20:11.724 19:50:53 -- host/auth.sh@45 -- # key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:11.724 19:50:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:11.724 19:50:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:11.724 19:50:53 -- host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:11.724 19:50:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:20:11.724 19:50:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:11.724 19:50:53 -- host/auth.sh@68 -- # digest=sha384 00:20:11.724 19:50:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:11.724 19:50:53 -- host/auth.sh@68 -- # keyid=0 00:20:11.724 19:50:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:11.724 19:50:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.724 19:50:53 -- common/autotest_common.sh@10 -- # set +x 00:20:11.724 19:50:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.724 19:50:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:11.724 19:50:53 -- nvmf/common.sh@717 -- # local ip 00:20:11.724 19:50:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:11.724 19:50:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:11.724 19:50:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.724 19:50:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.724 19:50:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:11.724 19:50:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.724 19:50:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:11.724 
19:50:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:11.724 19:50:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:11.724 19:50:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:11.724 19:50:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.724 19:50:53 -- common/autotest_common.sh@10 -- # set +x 00:20:12.291 nvme0n1 00:20:12.291 19:50:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.291 19:50:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.291 19:50:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:12.291 19:50:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.291 19:50:53 -- common/autotest_common.sh@10 -- # set +x 00:20:12.291 19:50:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.291 19:50:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.291 19:50:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.291 19:50:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.291 19:50:53 -- common/autotest_common.sh@10 -- # set +x 00:20:12.291 19:50:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.291 19:50:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:12.291 19:50:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:12.291 19:50:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:12.291 19:50:53 -- host/auth.sh@44 -- # digest=sha384 00:20:12.291 19:50:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:12.291 19:50:53 -- host/auth.sh@44 -- # keyid=1 00:20:12.291 19:50:53 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:12.291 19:50:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:12.291 19:50:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:12.291 19:50:53 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:12.291 19:50:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:20:12.291 19:50:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:12.291 19:50:53 -- host/auth.sh@68 -- # digest=sha384 00:20:12.291 19:50:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:12.291 19:50:53 -- host/auth.sh@68 -- # keyid=1 00:20:12.291 19:50:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:12.291 19:50:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.291 19:50:53 -- common/autotest_common.sh@10 -- # set +x 00:20:12.291 19:50:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.291 19:50:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:12.291 19:50:53 -- nvmf/common.sh@717 -- # local ip 00:20:12.291 19:50:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:12.291 19:50:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:12.291 19:50:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.291 19:50:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.291 19:50:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:12.291 19:50:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.291 19:50:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:12.291 19:50:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:12.291 19:50:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
00:20:12.291 19:50:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:12.291 19:50:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.291 19:50:53 -- common/autotest_common.sh@10 -- # set +x 00:20:12.857 nvme0n1 00:20:12.857 19:50:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.857 19:50:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.857 19:50:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.857 19:50:54 -- common/autotest_common.sh@10 -- # set +x 00:20:12.857 19:50:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:12.857 19:50:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.857 19:50:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.857 19:50:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.857 19:50:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.857 19:50:54 -- common/autotest_common.sh@10 -- # set +x 00:20:12.857 19:50:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.857 19:50:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:12.857 19:50:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:12.857 19:50:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:12.857 19:50:54 -- host/auth.sh@44 -- # digest=sha384 00:20:12.857 19:50:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:12.857 19:50:54 -- host/auth.sh@44 -- # keyid=2 00:20:12.857 19:50:54 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:12.857 19:50:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:12.857 19:50:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:12.857 19:50:54 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:12.857 19:50:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:20:12.857 19:50:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:12.857 19:50:54 -- host/auth.sh@68 -- # digest=sha384 00:20:12.857 19:50:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:12.857 19:50:54 -- host/auth.sh@68 -- # keyid=2 00:20:12.857 19:50:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:12.858 19:50:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.858 19:50:54 -- common/autotest_common.sh@10 -- # set +x 00:20:12.858 19:50:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.858 19:50:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:12.858 19:50:54 -- nvmf/common.sh@717 -- # local ip 00:20:12.858 19:50:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:12.858 19:50:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:12.858 19:50:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.858 19:50:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.858 19:50:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:12.858 19:50:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.858 19:50:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:12.858 19:50:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:12.858 19:50:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:12.858 19:50:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:12.858 19:50:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.858 19:50:54 -- common/autotest_common.sh@10 -- # set +x 00:20:13.424 nvme0n1 00:20:13.424 19:50:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.424 19:50:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.424 19:50:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.424 19:50:54 -- common/autotest_common.sh@10 -- # set +x 00:20:13.424 19:50:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:13.424 19:50:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.424 19:50:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.424 19:50:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.424 19:50:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.424 19:50:54 -- common/autotest_common.sh@10 -- # set +x 00:20:13.424 19:50:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.424 19:50:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:13.424 19:50:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:20:13.424 19:50:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:13.424 19:50:54 -- host/auth.sh@44 -- # digest=sha384 00:20:13.424 19:50:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:13.424 19:50:54 -- host/auth.sh@44 -- # keyid=3 00:20:13.424 19:50:54 -- host/auth.sh@45 -- # key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:13.424 19:50:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:13.424 19:50:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:13.424 19:50:54 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:13.424 19:50:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:20:13.424 19:50:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:13.424 19:50:54 -- host/auth.sh@68 -- # digest=sha384 00:20:13.424 19:50:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:13.424 19:50:54 -- host/auth.sh@68 -- # keyid=3 00:20:13.424 19:50:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:13.424 19:50:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.424 19:50:54 -- common/autotest_common.sh@10 -- # set +x 00:20:13.424 19:50:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.424 19:50:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:13.424 19:50:54 -- nvmf/common.sh@717 -- # local ip 00:20:13.424 19:50:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:13.424 19:50:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:13.424 19:50:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.424 19:50:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.424 19:50:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:13.424 19:50:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.424 19:50:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:13.424 19:50:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:13.424 19:50:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:13.424 19:50:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:13.424 19:50:54 -- common/autotest_common.sh@549 -- # xtrace_disable 
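Every connect_authenticate invocation in this log expands to the same four host-side RPC calls, which are easy to lose in the xtrace noise. Extracted verbatim from the trace (rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py, and get_main_ns_ip resolves to 10.0.0.1, the NVMF_INITIATOR_IP on this rig):

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    # the attach only succeeds if DH-HMAC-CHAP negotiation completed; verify, then clean up
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines interleaved in the output are bdev_nvme_attach_controller printing the bdev it created for the namespace of each successfully authenticated controller.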
00:20:13.424 19:50:54 -- common/autotest_common.sh@10 -- # set +x 00:20:13.990 nvme0n1 00:20:13.990 19:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.990 19:50:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.990 19:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.990 19:50:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:13.990 19:50:55 -- common/autotest_common.sh@10 -- # set +x 00:20:13.990 19:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.990 19:50:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.990 19:50:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.990 19:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.990 19:50:55 -- common/autotest_common.sh@10 -- # set +x 00:20:13.990 19:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.990 19:50:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:13.990 19:50:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:13.990 19:50:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:13.990 19:50:55 -- host/auth.sh@44 -- # digest=sha384 00:20:13.990 19:50:55 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:13.990 19:50:55 -- host/auth.sh@44 -- # keyid=4 00:20:13.990 19:50:55 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:13.990 19:50:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:13.990 19:50:55 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:13.990 19:50:55 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:13.991 19:50:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:20:13.991 19:50:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:13.991 19:50:55 -- host/auth.sh@68 -- # digest=sha384 00:20:13.991 19:50:55 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:13.991 19:50:55 -- host/auth.sh@68 -- # keyid=4 00:20:13.991 19:50:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:13.991 19:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.991 19:50:55 -- common/autotest_common.sh@10 -- # set +x 00:20:13.991 19:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.991 19:50:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:13.991 19:50:55 -- nvmf/common.sh@717 -- # local ip 00:20:13.991 19:50:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:13.991 19:50:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:13.991 19:50:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.991 19:50:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.991 19:50:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:13.991 19:50:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.991 19:50:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:13.991 19:50:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:13.991 19:50:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:13.991 19:50:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:13.991 19:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.991 19:50:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.556 
nvme0n1 00:20:14.556 19:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.556 19:50:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.556 19:50:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:14.556 19:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.556 19:50:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.556 19:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.556 19:50:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.556 19:50:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.556 19:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.556 19:50:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.556 19:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.556 19:50:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.556 19:50:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:14.556 19:50:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:14.556 19:50:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:14.556 19:50:56 -- host/auth.sh@44 -- # digest=sha384 00:20:14.556 19:50:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:14.556 19:50:56 -- host/auth.sh@44 -- # keyid=0 00:20:14.556 19:50:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:14.556 19:50:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:14.556 19:50:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:14.556 19:50:56 -- host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:14.556 19:50:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:20:14.556 19:50:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:14.556 19:50:56 -- host/auth.sh@68 -- # digest=sha384 00:20:14.556 19:50:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:14.556 19:50:56 -- host/auth.sh@68 -- # keyid=0 00:20:14.556 19:50:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:14.557 19:50:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.557 19:50:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.557 19:50:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.557 19:50:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:14.557 19:50:56 -- nvmf/common.sh@717 -- # local ip 00:20:14.557 19:50:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:14.557 19:50:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:14.557 19:50:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.557 19:50:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.557 19:50:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:14.557 19:50:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.557 19:50:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:14.557 19:50:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:14.557 19:50:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:14.557 19:50:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:14.557 19:50:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.557 19:50:56 -- common/autotest_common.sh@10 -- # set +x 00:20:15.490 nvme0n1 00:20:15.490 19:50:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
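On the target side, the three echoes inside nvmet_auth_set_key (host/auth.sh@47-49 above: the 'hmac(...)' string, the dhgroup name, and the DHHC-1 secret) are writes into the kernel nvmet configfs entry for the allowed host. An illustrative sketch only: the attribute names below match the upstream Linux nvmet configfs interface, but the host-NQN directory is a placeholder and the key is shortened:

    # illustrative configfs writes -- <hostnqn> is a placeholder, key truncated
    cd /sys/kernel/config/nvmet/hosts/<hostnqn>
    echo 'hmac(sha384)' > dhchap_hash       # digest under test
    echo 'ffdhe8192'    > dhchap_dhgroup    # DH group under test
    echo 'DHHC-1:00:MTA1...' > dhchap_key   # host secret, elided here

The DHHC-1:tt:<base64>: secrets themselves carry their transformation in the second field per the NVMe-oF secret representation: 00 means the secret is used as-is, while 01/02/03 request a SHA-256/384/512 transform and imply 32/48/64-byte secrets, which is why the keys cycled through this run carry 00, 00, 01, 02 and 03 prefixes with base64 fields of matching length.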
00:20:15.490 19:50:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.490 19:50:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:15.490 19:50:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.490 19:50:56 -- common/autotest_common.sh@10 -- # set +x 00:20:15.490 19:50:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.490 19:50:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.490 19:50:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.490 19:50:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.490 19:50:56 -- common/autotest_common.sh@10 -- # set +x 00:20:15.750 19:50:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.750 19:50:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:15.750 19:50:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:15.750 19:50:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:15.750 19:50:57 -- host/auth.sh@44 -- # digest=sha384 00:20:15.750 19:50:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:15.750 19:50:57 -- host/auth.sh@44 -- # keyid=1 00:20:15.750 19:50:57 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:15.750 19:50:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:15.750 19:50:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:15.750 19:50:57 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:15.750 19:50:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:20:15.750 19:50:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:15.750 19:50:57 -- host/auth.sh@68 -- # digest=sha384 00:20:15.750 19:50:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:15.750 19:50:57 -- host/auth.sh@68 -- # keyid=1 00:20:15.750 19:50:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:15.750 19:50:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.750 19:50:57 -- common/autotest_common.sh@10 -- # set +x 00:20:15.750 19:50:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.750 19:50:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:15.750 19:50:57 -- nvmf/common.sh@717 -- # local ip 00:20:15.750 19:50:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:15.750 19:50:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:15.750 19:50:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.750 19:50:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.750 19:50:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:15.750 19:50:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.750 19:50:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:15.750 19:50:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:15.750 19:50:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:15.750 19:50:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:15.750 19:50:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.750 19:50:57 -- common/autotest_common.sh@10 -- # set +x 00:20:16.685 nvme0n1 00:20:16.685 19:50:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.685 19:50:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.685 19:50:57 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.685 19:50:57 -- common/autotest_common.sh@10 -- # set +x 00:20:16.685 19:50:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:16.685 19:50:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.685 19:50:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.685 19:50:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.685 19:50:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.685 19:50:57 -- common/autotest_common.sh@10 -- # set +x 00:20:16.685 19:50:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.685 19:50:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:16.685 19:50:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:16.685 19:50:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:16.685 19:50:57 -- host/auth.sh@44 -- # digest=sha384 00:20:16.685 19:50:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:16.685 19:50:57 -- host/auth.sh@44 -- # keyid=2 00:20:16.685 19:50:57 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:16.685 19:50:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:16.685 19:50:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:16.685 19:50:57 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:16.685 19:50:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:20:16.685 19:50:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:16.685 19:50:57 -- host/auth.sh@68 -- # digest=sha384 00:20:16.685 19:50:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:16.685 19:50:57 -- host/auth.sh@68 -- # keyid=2 00:20:16.685 19:50:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:16.685 19:50:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.685 19:50:58 -- common/autotest_common.sh@10 -- # set +x 00:20:16.685 19:50:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.685 19:50:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:16.685 19:50:58 -- nvmf/common.sh@717 -- # local ip 00:20:16.685 19:50:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:16.685 19:50:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:16.685 19:50:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.685 19:50:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.685 19:50:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:16.685 19:50:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.685 19:50:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:16.685 19:50:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:16.685 19:50:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:16.685 19:50:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:16.685 19:50:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.685 19:50:58 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 nvme0n1 00:20:17.620 19:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.620 19:50:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.620 19:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.620 19:50:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 19:50:59 -- host/auth.sh@73 -- # jq -r 
'.[].name' 00:20:17.620 19:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.620 19:50:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.620 19:50:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.620 19:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.620 19:50:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 19:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.620 19:50:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:17.620 19:50:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:17.620 19:50:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:17.620 19:50:59 -- host/auth.sh@44 -- # digest=sha384 00:20:17.620 19:50:59 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:17.620 19:50:59 -- host/auth.sh@44 -- # keyid=3 00:20:17.620 19:50:59 -- host/auth.sh@45 -- # key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:17.620 19:50:59 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:17.620 19:50:59 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:17.620 19:50:59 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:17.620 19:50:59 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:20:17.620 19:50:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:17.620 19:50:59 -- host/auth.sh@68 -- # digest=sha384 00:20:17.620 19:50:59 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:17.620 19:50:59 -- host/auth.sh@68 -- # keyid=3 00:20:17.620 19:50:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:17.620 19:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.620 19:50:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 19:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.620 19:50:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:17.620 19:50:59 -- nvmf/common.sh@717 -- # local ip 00:20:17.620 19:50:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:17.620 19:50:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:17.620 19:50:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.620 19:50:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.620 19:50:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:17.620 19:50:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.620 19:50:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:17.620 19:50:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:17.620 19:50:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:17.620 19:50:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:17.620 19:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.620 19:50:59 -- common/autotest_common.sh@10 -- # set +x 00:20:18.554 nvme0n1 00:20:18.554 19:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.554 19:50:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.554 19:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.554 19:50:59 -- common/autotest_common.sh@10 -- # set +x 00:20:18.554 19:50:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:18.554 19:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.554 19:50:59 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.554 19:50:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.554 19:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.554 19:50:59 -- common/autotest_common.sh@10 -- # set +x 00:20:18.554 19:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.554 19:50:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:18.554 19:50:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:18.554 19:50:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:18.554 19:50:59 -- host/auth.sh@44 -- # digest=sha384 00:20:18.554 19:50:59 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:18.554 19:50:59 -- host/auth.sh@44 -- # keyid=4 00:20:18.554 19:50:59 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:18.554 19:50:59 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:18.554 19:50:59 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:18.554 19:50:59 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:18.554 19:50:59 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:20:18.554 19:50:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:18.554 19:50:59 -- host/auth.sh@68 -- # digest=sha384 00:20:18.554 19:50:59 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:18.554 19:50:59 -- host/auth.sh@68 -- # keyid=4 00:20:18.554 19:50:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:18.554 19:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.554 19:50:59 -- common/autotest_common.sh@10 -- # set +x 00:20:18.554 19:50:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.554 19:50:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:18.554 19:50:59 -- nvmf/common.sh@717 -- # local ip 00:20:18.554 19:50:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:18.554 19:50:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:18.554 19:50:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.554 19:50:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.554 19:50:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:18.554 19:50:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.554 19:50:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:18.554 19:50:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:18.554 19:50:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:18.554 19:50:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:18.554 19:50:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.554 19:50:59 -- common/autotest_common.sh@10 -- # set +x 00:20:19.490 nvme0n1 00:20:19.490 19:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.490 19:51:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.490 19:51:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:19.490 19:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.490 19:51:00 -- common/autotest_common.sh@10 -- # set +x 00:20:19.490 19:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.490 19:51:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.490 19:51:00 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.490 19:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.490 19:51:00 -- common/autotest_common.sh@10 -- # set +x 00:20:19.490 19:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.490 19:51:00 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:19.490 19:51:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.490 19:51:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:19.490 19:51:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:19.490 19:51:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:19.490 19:51:00 -- host/auth.sh@44 -- # digest=sha512 00:20:19.490 19:51:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:19.490 19:51:00 -- host/auth.sh@44 -- # keyid=0 00:20:19.490 19:51:00 -- host/auth.sh@45 -- # key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:19.490 19:51:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:19.490 19:51:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:19.490 19:51:00 -- host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:19.490 19:51:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:20:19.490 19:51:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:19.490 19:51:00 -- host/auth.sh@68 -- # digest=sha512 00:20:19.490 19:51:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:19.490 19:51:00 -- host/auth.sh@68 -- # keyid=0 00:20:19.490 19:51:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:19.490 19:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.490 19:51:00 -- common/autotest_common.sh@10 -- # set +x 00:20:19.490 19:51:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.490 19:51:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:19.490 19:51:00 -- nvmf/common.sh@717 -- # local ip 00:20:19.490 19:51:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:19.490 19:51:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:19.490 19:51:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.490 19:51:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.490 19:51:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:19.490 19:51:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.490 19:51:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:19.490 19:51:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:19.490 19:51:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:19.490 19:51:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:19.490 19:51:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.490 19:51:00 -- common/autotest_common.sh@10 -- # set +x 00:20:19.749 nvme0n1 00:20:19.749 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.749 19:51:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.749 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.749 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:19.749 19:51:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:19.749 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.749 19:51:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.749 19:51:01 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.749 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.749 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:19.749 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.749 19:51:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:19.749 19:51:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:19.749 19:51:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:19.749 19:51:01 -- host/auth.sh@44 -- # digest=sha512 00:20:19.749 19:51:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:19.749 19:51:01 -- host/auth.sh@44 -- # keyid=1 00:20:19.749 19:51:01 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:19.749 19:51:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:19.749 19:51:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:19.749 19:51:01 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:19.749 19:51:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:20:19.749 19:51:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:19.749 19:51:01 -- host/auth.sh@68 -- # digest=sha512 00:20:19.749 19:51:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:19.749 19:51:01 -- host/auth.sh@68 -- # keyid=1 00:20:19.749 19:51:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:19.749 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.749 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:19.749 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.749 19:51:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:19.749 19:51:01 -- nvmf/common.sh@717 -- # local ip 00:20:19.749 19:51:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:19.749 19:51:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:19.749 19:51:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.749 19:51:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.749 19:51:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:19.749 19:51:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.749 19:51:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:19.749 19:51:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:19.749 19:51:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:19.749 19:51:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:19.749 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.749 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:20.007 nvme0n1 00:20:20.007 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.007 19:51:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.007 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.007 19:51:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:20.007 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:20.007 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.007 19:51:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.007 19:51:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.007 19:51:01 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:20:20.007 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:20.007 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.007 19:51:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.007 19:51:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:20.007 19:51:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.007 19:51:01 -- host/auth.sh@44 -- # digest=sha512 00:20:20.007 19:51:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:20.007 19:51:01 -- host/auth.sh@44 -- # keyid=2 00:20:20.007 19:51:01 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:20.007 19:51:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:20.007 19:51:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:20.007 19:51:01 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:20.007 19:51:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:20:20.007 19:51:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:20.007 19:51:01 -- host/auth.sh@68 -- # digest=sha512 00:20:20.007 19:51:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:20.007 19:51:01 -- host/auth.sh@68 -- # keyid=2 00:20:20.007 19:51:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:20.007 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.007 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:20.007 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.007 19:51:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:20.007 19:51:01 -- nvmf/common.sh@717 -- # local ip 00:20:20.007 19:51:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:20.007 19:51:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:20.007 19:51:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.007 19:51:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.007 19:51:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:20.007 19:51:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.007 19:51:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:20.007 19:51:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:20.007 19:51:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:20.007 19:51:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:20.007 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.007 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:20.265 nvme0n1 00:20:20.265 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.265 19:51:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.265 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.265 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:20.265 19:51:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:20.265 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.265 19:51:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.265 19:51:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.265 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.266 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:20.266 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.266 
19:51:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.266 19:51:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:20.266 19:51:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.266 19:51:01 -- host/auth.sh@44 -- # digest=sha512 00:20:20.266 19:51:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:20.266 19:51:01 -- host/auth.sh@44 -- # keyid=3 00:20:20.266 19:51:01 -- host/auth.sh@45 -- # key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:20.266 19:51:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:20.266 19:51:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:20.266 19:51:01 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:20.266 19:51:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:20:20.266 19:51:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:20.266 19:51:01 -- host/auth.sh@68 -- # digest=sha512 00:20:20.266 19:51:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:20.266 19:51:01 -- host/auth.sh@68 -- # keyid=3 00:20:20.266 19:51:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:20.266 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.266 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:20.266 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.266 19:51:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:20.266 19:51:01 -- nvmf/common.sh@717 -- # local ip 00:20:20.266 19:51:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:20.266 19:51:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:20.266 19:51:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.266 19:51:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.266 19:51:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:20.266 19:51:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.266 19:51:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:20.266 19:51:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:20.266 19:51:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:20.266 19:51:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:20.266 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.266 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:20.266 nvme0n1 00:20:20.266 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.266 19:51:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.266 19:51:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:20.266 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.266 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:20.266 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.524 19:51:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.524 19:51:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.524 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.524 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:20.524 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.524 19:51:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.524 19:51:01 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:20:20.524 19:51:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.524 19:51:01 -- host/auth.sh@44 -- # digest=sha512 00:20:20.524 19:51:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:20.524 19:51:01 -- host/auth.sh@44 -- # keyid=4 00:20:20.524 19:51:01 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:20.524 19:51:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:20.524 19:51:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:20.524 19:51:01 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:20.524 19:51:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:20:20.524 19:51:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:20.524 19:51:01 -- host/auth.sh@68 -- # digest=sha512 00:20:20.524 19:51:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:20.524 19:51:01 -- host/auth.sh@68 -- # keyid=4 00:20:20.524 19:51:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:20.524 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.524 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:20.524 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.524 19:51:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:20.524 19:51:01 -- nvmf/common.sh@717 -- # local ip 00:20:20.524 19:51:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:20.524 19:51:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:20.524 19:51:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.524 19:51:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.524 19:51:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:20.524 19:51:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.524 19:51:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:20.524 19:51:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:20.524 19:51:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:20.524 19:51:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:20.524 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.524 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:20.524 nvme0n1 00:20:20.524 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.524 19:51:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.524 19:51:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.524 19:51:01 -- common/autotest_common.sh@10 -- # set +x 00:20:20.524 19:51:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:20.524 19:51:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.524 19:51:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.524 19:51:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.524 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.524 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.782 19:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.783 19:51:02 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.783 19:51:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.783 19:51:02 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe3072 0 00:20:20.783 19:51:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.783 19:51:02 -- host/auth.sh@44 -- # digest=sha512 00:20:20.783 19:51:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:20.783 19:51:02 -- host/auth.sh@44 -- # keyid=0 00:20:20.783 19:51:02 -- host/auth.sh@45 -- # key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:20.783 19:51:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:20.783 19:51:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:20.783 19:51:02 -- host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:20.783 19:51:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:20:20.783 19:51:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:20.783 19:51:02 -- host/auth.sh@68 -- # digest=sha512 00:20:20.783 19:51:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:20.783 19:51:02 -- host/auth.sh@68 -- # keyid=0 00:20:20.783 19:51:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:20.783 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.783 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.783 19:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.783 19:51:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:20.783 19:51:02 -- nvmf/common.sh@717 -- # local ip 00:20:20.783 19:51:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:20.783 19:51:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:20.783 19:51:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.783 19:51:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.783 19:51:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:20.783 19:51:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.783 19:51:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:20.783 19:51:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:20.783 19:51:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:20.783 19:51:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:20.783 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.783 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.783 nvme0n1 00:20:20.783 19:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.783 19:51:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.783 19:51:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:20.783 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.783 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.783 19:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.783 19:51:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.783 19:51:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.783 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.783 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.783 19:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.783 19:51:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.783 19:51:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:20.783 19:51:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.783 19:51:02 -- host/auth.sh@44 -- # 
digest=sha512 00:20:20.783 19:51:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:20.783 19:51:02 -- host/auth.sh@44 -- # keyid=1 00:20:20.783 19:51:02 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:20.783 19:51:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:20.783 19:51:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:20.783 19:51:02 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:20.783 19:51:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:20:20.783 19:51:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:20.783 19:51:02 -- host/auth.sh@68 -- # digest=sha512 00:20:20.783 19:51:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:20.783 19:51:02 -- host/auth.sh@68 -- # keyid=1 00:20:20.783 19:51:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:20.783 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.783 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:21.041 19:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.041 19:51:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:21.041 19:51:02 -- nvmf/common.sh@717 -- # local ip 00:20:21.041 19:51:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:21.041 19:51:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:21.041 19:51:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.041 19:51:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.041 19:51:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:21.041 19:51:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.041 19:51:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:21.041 19:51:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:21.041 19:51:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:21.041 19:51:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:21.041 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.041 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:21.041 nvme0n1 00:20:21.041 19:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.041 19:51:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.041 19:51:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:21.041 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.041 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:21.041 19:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.041 19:51:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.041 19:51:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.041 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.041 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:21.041 19:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.041 19:51:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:21.041 19:51:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:21.041 19:51:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:21.041 19:51:02 -- host/auth.sh@44 -- # digest=sha512 00:20:21.041 19:51:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:21.041 19:51:02 -- host/auth.sh@44 
-- # keyid=2 00:20:21.041 19:51:02 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:21.041 19:51:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:21.041 19:51:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:21.041 19:51:02 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:21.041 19:51:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:20:21.041 19:51:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:21.041 19:51:02 -- host/auth.sh@68 -- # digest=sha512 00:20:21.041 19:51:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:21.041 19:51:02 -- host/auth.sh@68 -- # keyid=2 00:20:21.041 19:51:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:21.041 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.041 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:21.299 19:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.299 19:51:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:21.299 19:51:02 -- nvmf/common.sh@717 -- # local ip 00:20:21.299 19:51:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:21.299 19:51:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:21.299 19:51:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.299 19:51:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.299 19:51:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:21.299 19:51:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.299 19:51:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:21.299 19:51:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:21.299 19:51:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:21.299 19:51:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:21.299 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.299 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:21.299 nvme0n1 00:20:21.299 19:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.299 19:51:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.299 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.299 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:21.299 19:51:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:21.299 19:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.299 19:51:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.299 19:51:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.299 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.299 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:21.299 19:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.299 19:51:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:21.299 19:51:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:21.299 19:51:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:21.299 19:51:02 -- host/auth.sh@44 -- # digest=sha512 00:20:21.299 19:51:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:21.299 19:51:02 -- host/auth.sh@44 -- # keyid=3 00:20:21.299 19:51:02 -- host/auth.sh@45 -- # key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:21.299 19:51:02 
-- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:21.299 19:51:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:21.299 19:51:02 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:21.299 19:51:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:20:21.299 19:51:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:21.299 19:51:02 -- host/auth.sh@68 -- # digest=sha512 00:20:21.299 19:51:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:21.299 19:51:02 -- host/auth.sh@68 -- # keyid=3 00:20:21.299 19:51:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:21.299 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.557 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:21.557 19:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.557 19:51:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:21.557 19:51:02 -- nvmf/common.sh@717 -- # local ip 00:20:21.557 19:51:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:21.557 19:51:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:21.557 19:51:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.557 19:51:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.557 19:51:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:21.557 19:51:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.557 19:51:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:21.557 19:51:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:21.557 19:51:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:21.557 19:51:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:21.557 19:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.557 19:51:02 -- common/autotest_common.sh@10 -- # set +x 00:20:21.557 nvme0n1 00:20:21.557 19:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.557 19:51:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.557 19:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.557 19:51:03 -- common/autotest_common.sh@10 -- # set +x 00:20:21.557 19:51:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:21.557 19:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.557 19:51:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.557 19:51:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.557 19:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.557 19:51:03 -- common/autotest_common.sh@10 -- # set +x 00:20:21.815 19:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.815 19:51:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:21.815 19:51:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:21.816 19:51:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:21.816 19:51:03 -- host/auth.sh@44 -- # digest=sha512 00:20:21.816 19:51:03 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:21.816 19:51:03 -- host/auth.sh@44 -- # keyid=4 00:20:21.816 19:51:03 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:21.816 19:51:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:21.816 19:51:03 -- host/auth.sh@48 -- # echo 
ffdhe3072 00:20:21.816 19:51:03 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:21.816 19:51:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:20:21.816 19:51:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:21.816 19:51:03 -- host/auth.sh@68 -- # digest=sha512 00:20:21.816 19:51:03 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:21.816 19:51:03 -- host/auth.sh@68 -- # keyid=4 00:20:21.816 19:51:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:21.816 19:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.816 19:51:03 -- common/autotest_common.sh@10 -- # set +x 00:20:21.816 19:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.816 19:51:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:21.816 19:51:03 -- nvmf/common.sh@717 -- # local ip 00:20:21.816 19:51:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:21.816 19:51:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:21.816 19:51:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.816 19:51:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.816 19:51:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:21.816 19:51:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.816 19:51:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:21.816 19:51:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:21.816 19:51:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:21.816 19:51:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:21.816 19:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.816 19:51:03 -- common/autotest_common.sh@10 -- # set +x 00:20:21.816 nvme0n1 00:20:21.816 19:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.816 19:51:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.816 19:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.816 19:51:03 -- common/autotest_common.sh@10 -- # set +x 00:20:21.816 19:51:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:21.816 19:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.816 19:51:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.816 19:51:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.816 19:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.816 19:51:03 -- common/autotest_common.sh@10 -- # set +x 00:20:22.074 19:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.074 19:51:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.074 19:51:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:22.074 19:51:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:22.074 19:51:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:22.074 19:51:03 -- host/auth.sh@44 -- # digest=sha512 00:20:22.074 19:51:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:22.074 19:51:03 -- host/auth.sh@44 -- # keyid=0 00:20:22.074 19:51:03 -- host/auth.sh@45 -- # key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:22.074 19:51:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:22.074 19:51:03 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:22.074 19:51:03 -- 
host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:22.074 19:51:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:20:22.074 19:51:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:22.074 19:51:03 -- host/auth.sh@68 -- # digest=sha512 00:20:22.074 19:51:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:22.074 19:51:03 -- host/auth.sh@68 -- # keyid=0 00:20:22.074 19:51:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:22.074 19:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.074 19:51:03 -- common/autotest_common.sh@10 -- # set +x 00:20:22.074 19:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.074 19:51:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:22.074 19:51:03 -- nvmf/common.sh@717 -- # local ip 00:20:22.074 19:51:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:22.074 19:51:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:22.074 19:51:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.074 19:51:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.074 19:51:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:22.074 19:51:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.074 19:51:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:22.074 19:51:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:22.074 19:51:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:22.074 19:51:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:22.074 19:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.074 19:51:03 -- common/autotest_common.sh@10 -- # set +x 00:20:22.332 nvme0n1 00:20:22.332 19:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.332 19:51:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.332 19:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.332 19:51:03 -- common/autotest_common.sh@10 -- # set +x 00:20:22.332 19:51:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:22.332 19:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.332 19:51:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.332 19:51:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.332 19:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.332 19:51:03 -- common/autotest_common.sh@10 -- # set +x 00:20:22.332 19:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.332 19:51:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:22.332 19:51:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:22.332 19:51:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:22.332 19:51:03 -- host/auth.sh@44 -- # digest=sha512 00:20:22.332 19:51:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:22.332 19:51:03 -- host/auth.sh@44 -- # keyid=1 00:20:22.332 19:51:03 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:22.332 19:51:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:22.332 19:51:03 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:22.332 19:51:03 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:22.332 19:51:03 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:20:22.332 19:51:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:22.332 19:51:03 -- host/auth.sh@68 -- # digest=sha512 00:20:22.332 19:51:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:22.332 19:51:03 -- host/auth.sh@68 -- # keyid=1 00:20:22.332 19:51:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:22.332 19:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.332 19:51:03 -- common/autotest_common.sh@10 -- # set +x 00:20:22.332 19:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.332 19:51:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:22.332 19:51:03 -- nvmf/common.sh@717 -- # local ip 00:20:22.332 19:51:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:22.332 19:51:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:22.332 19:51:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.332 19:51:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.332 19:51:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:22.332 19:51:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.332 19:51:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:22.332 19:51:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:22.332 19:51:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:22.332 19:51:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:22.332 19:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.332 19:51:03 -- common/autotest_common.sh@10 -- # set +x 00:20:22.590 nvme0n1 00:20:22.590 19:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.590 19:51:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.590 19:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.590 19:51:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:22.590 19:51:04 -- common/autotest_common.sh@10 -- # set +x 00:20:22.590 19:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.590 19:51:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.590 19:51:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.590 19:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.590 19:51:04 -- common/autotest_common.sh@10 -- # set +x 00:20:22.590 19:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.590 19:51:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:22.590 19:51:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:22.590 19:51:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:22.590 19:51:04 -- host/auth.sh@44 -- # digest=sha512 00:20:22.590 19:51:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:22.590 19:51:04 -- host/auth.sh@44 -- # keyid=2 00:20:22.590 19:51:04 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:22.590 19:51:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:22.590 19:51:04 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:22.590 19:51:04 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:22.590 19:51:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:20:22.590 19:51:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:22.590 19:51:04 -- 
host/auth.sh@68 -- # digest=sha512 00:20:22.590 19:51:04 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:22.590 19:51:04 -- host/auth.sh@68 -- # keyid=2 00:20:22.590 19:51:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:22.591 19:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.591 19:51:04 -- common/autotest_common.sh@10 -- # set +x 00:20:22.591 19:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.591 19:51:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:22.591 19:51:04 -- nvmf/common.sh@717 -- # local ip 00:20:22.591 19:51:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:22.591 19:51:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:22.591 19:51:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.591 19:51:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.591 19:51:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:22.591 19:51:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.591 19:51:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:22.591 19:51:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:22.591 19:51:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:22.591 19:51:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:22.591 19:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.591 19:51:04 -- common/autotest_common.sh@10 -- # set +x 00:20:23.158 nvme0n1 00:20:23.158 19:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.158 19:51:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.158 19:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.158 19:51:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:23.158 19:51:04 -- common/autotest_common.sh@10 -- # set +x 00:20:23.158 19:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.158 19:51:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.158 19:51:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.158 19:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.158 19:51:04 -- common/autotest_common.sh@10 -- # set +x 00:20:23.158 19:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.158 19:51:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:23.158 19:51:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:23.158 19:51:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:23.158 19:51:04 -- host/auth.sh@44 -- # digest=sha512 00:20:23.158 19:51:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:23.158 19:51:04 -- host/auth.sh@44 -- # keyid=3 00:20:23.158 19:51:04 -- host/auth.sh@45 -- # key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:23.158 19:51:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:23.158 19:51:04 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:23.158 19:51:04 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:23.158 19:51:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:20:23.158 19:51:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:23.158 19:51:04 -- host/auth.sh@68 -- # digest=sha512 00:20:23.158 19:51:04 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:23.158 19:51:04 
-- host/auth.sh@68 -- # keyid=3 00:20:23.158 19:51:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:23.158 19:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.158 19:51:04 -- common/autotest_common.sh@10 -- # set +x 00:20:23.158 19:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.158 19:51:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:23.158 19:51:04 -- nvmf/common.sh@717 -- # local ip 00:20:23.158 19:51:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:23.158 19:51:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:23.158 19:51:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.158 19:51:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.158 19:51:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:23.158 19:51:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.158 19:51:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:23.158 19:51:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:23.158 19:51:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:23.158 19:51:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:23.158 19:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.158 19:51:04 -- common/autotest_common.sh@10 -- # set +x 00:20:23.417 nvme0n1 00:20:23.417 19:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.417 19:51:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:23.417 19:51:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.417 19:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.417 19:51:04 -- common/autotest_common.sh@10 -- # set +x 00:20:23.417 19:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.417 19:51:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.417 19:51:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.417 19:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.417 19:51:04 -- common/autotest_common.sh@10 -- # set +x 00:20:23.417 19:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.417 19:51:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:23.417 19:51:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:23.417 19:51:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:23.417 19:51:04 -- host/auth.sh@44 -- # digest=sha512 00:20:23.418 19:51:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:23.418 19:51:04 -- host/auth.sh@44 -- # keyid=4 00:20:23.418 19:51:04 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:23.418 19:51:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:23.418 19:51:04 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:23.418 19:51:04 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:23.418 19:51:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:20:23.418 19:51:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:23.418 19:51:04 -- host/auth.sh@68 -- # digest=sha512 00:20:23.418 19:51:04 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:23.418 19:51:04 -- host/auth.sh@68 -- # keyid=4 00:20:23.418 19:51:04 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:23.418 19:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.418 19:51:04 -- common/autotest_common.sh@10 -- # set +x 00:20:23.418 19:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.418 19:51:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:23.418 19:51:04 -- nvmf/common.sh@717 -- # local ip 00:20:23.418 19:51:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:23.418 19:51:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:23.418 19:51:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.418 19:51:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.418 19:51:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:23.418 19:51:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.418 19:51:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:23.418 19:51:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:23.418 19:51:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:23.418 19:51:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:23.418 19:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.418 19:51:04 -- common/autotest_common.sh@10 -- # set +x 00:20:23.675 nvme0n1 00:20:23.675 19:51:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.675 19:51:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.675 19:51:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.675 19:51:05 -- common/autotest_common.sh@10 -- # set +x 00:20:23.675 19:51:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:23.675 19:51:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.675 19:51:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.676 19:51:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.676 19:51:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.676 19:51:05 -- common/autotest_common.sh@10 -- # set +x 00:20:23.676 19:51:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.676 19:51:05 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.676 19:51:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:23.676 19:51:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:23.676 19:51:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:23.676 19:51:05 -- host/auth.sh@44 -- # digest=sha512 00:20:23.676 19:51:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:23.676 19:51:05 -- host/auth.sh@44 -- # keyid=0 00:20:23.676 19:51:05 -- host/auth.sh@45 -- # key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:23.676 19:51:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:23.676 19:51:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:23.676 19:51:05 -- host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:23.676 19:51:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:20:23.676 19:51:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:23.676 19:51:05 -- host/auth.sh@68 -- # digest=sha512 00:20:23.676 19:51:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:23.676 19:51:05 -- host/auth.sh@68 -- # keyid=0 00:20:23.676 19:51:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
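The trace above repeats one pattern for every digest/dhgroup/key-id combination: program a DHCHAP key into the kernel nvmet target over configfs, restrict the SPDK initiator to the matching digest and DH group, attach, confirm the controller exists, then detach. A minimal bash sketch of that loop follows; the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key) are inferred from the three echoes at auth.sh@47-49 and are assumptions, not verbatim from this log.

    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0

    nvmet_auth_set_key() {  # target side: kernel nvmet via configfs
        local digest=$1 dhgroup=$2 keyid=$3
        local hostdir=/sys/kernel/config/nvmet/hosts/$hostnqn
        echo "hmac($digest)"   > "$hostdir/dhchap_hash"     # auth.sh@47
        echo "$dhgroup"        > "$hostdir/dhchap_dhgroup"  # auth.sh@48
        echo "${keys[$keyid]}" > "$hostdir/dhchap_key"      # auth.sh@49
    }

    connect_authenticate() {  # initiator side: SPDK bdev_nvme RPCs
        local digest=$1 dhgroup=$2 keyid=$3
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # 10.0.0.1 is the NVMF_INITIATOR_IP that get_main_ns_ip resolves in the trace.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"
        # The attach only succeeds if DH-HMAC-CHAP completed, so one name check suffices.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The outer loops at auth.sh@108-110 ("for dhgroup", "for keyid") walk the FFDHE groups and key ids 0-4 under sha512, which is why the same half-dozen RPCs recur throughout this stretch of the log.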
00:20:23.676 19:51:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.676 19:51:05 -- common/autotest_common.sh@10 -- # set +x 00:20:23.676 19:51:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.676 19:51:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:23.676 19:51:05 -- nvmf/common.sh@717 -- # local ip 00:20:23.676 19:51:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:23.676 19:51:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:23.676 19:51:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.676 19:51:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.676 19:51:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:23.676 19:51:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.676 19:51:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:23.676 19:51:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:23.676 19:51:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:23.676 19:51:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:23.676 19:51:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.676 19:51:05 -- common/autotest_common.sh@10 -- # set +x 00:20:24.242 nvme0n1 00:20:24.242 19:51:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.242 19:51:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.242 19:51:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:24.242 19:51:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.242 19:51:05 -- common/autotest_common.sh@10 -- # set +x 00:20:24.242 19:51:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.242 19:51:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.242 19:51:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.242 19:51:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.242 19:51:05 -- common/autotest_common.sh@10 -- # set +x 00:20:24.500 19:51:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.500 19:51:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:24.500 19:51:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:24.500 19:51:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:24.500 19:51:05 -- host/auth.sh@44 -- # digest=sha512 00:20:24.500 19:51:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:24.500 19:51:05 -- host/auth.sh@44 -- # keyid=1 00:20:24.500 19:51:05 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:24.500 19:51:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:24.500 19:51:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:24.500 19:51:05 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:24.500 19:51:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:20:24.500 19:51:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:24.500 19:51:05 -- host/auth.sh@68 -- # digest=sha512 00:20:24.500 19:51:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:24.500 19:51:05 -- host/auth.sh@68 -- # keyid=1 00:20:24.500 19:51:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:24.500 19:51:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.500 19:51:05 -- 
common/autotest_common.sh@10 -- # set +x 00:20:24.500 19:51:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.500 19:51:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:24.500 19:51:05 -- nvmf/common.sh@717 -- # local ip 00:20:24.500 19:51:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:24.500 19:51:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:24.500 19:51:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.500 19:51:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.500 19:51:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:24.500 19:51:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.500 19:51:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:24.500 19:51:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:24.500 19:51:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:24.500 19:51:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:24.500 19:51:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.500 19:51:05 -- common/autotest_common.sh@10 -- # set +x 00:20:25.065 nvme0n1 00:20:25.065 19:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.065 19:51:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.065 19:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.065 19:51:06 -- common/autotest_common.sh@10 -- # set +x 00:20:25.065 19:51:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:25.065 19:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.065 19:51:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.065 19:51:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.065 19:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.065 19:51:06 -- common/autotest_common.sh@10 -- # set +x 00:20:25.065 19:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.065 19:51:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:25.065 19:51:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:25.065 19:51:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:25.065 19:51:06 -- host/auth.sh@44 -- # digest=sha512 00:20:25.065 19:51:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:25.065 19:51:06 -- host/auth.sh@44 -- # keyid=2 00:20:25.065 19:51:06 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:25.065 19:51:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:25.065 19:51:06 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:25.065 19:51:06 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:25.065 19:51:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:20:25.065 19:51:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:25.065 19:51:06 -- host/auth.sh@68 -- # digest=sha512 00:20:25.065 19:51:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:25.065 19:51:06 -- host/auth.sh@68 -- # keyid=2 00:20:25.065 19:51:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:25.065 19:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.065 19:51:06 -- common/autotest_common.sh@10 -- # set +x 00:20:25.065 19:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.065 19:51:06 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:20:25.065 19:51:06 -- nvmf/common.sh@717 -- # local ip 00:20:25.065 19:51:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:25.065 19:51:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:25.065 19:51:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.065 19:51:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.065 19:51:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:25.065 19:51:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.065 19:51:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:25.065 19:51:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:25.065 19:51:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:25.065 19:51:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:25.065 19:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.065 19:51:06 -- common/autotest_common.sh@10 -- # set +x 00:20:25.629 nvme0n1 00:20:25.629 19:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.629 19:51:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.629 19:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.629 19:51:06 -- common/autotest_common.sh@10 -- # set +x 00:20:25.629 19:51:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:25.629 19:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.629 19:51:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.629 19:51:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.629 19:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.629 19:51:06 -- common/autotest_common.sh@10 -- # set +x 00:20:25.629 19:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.629 19:51:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:25.629 19:51:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:25.629 19:51:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:25.629 19:51:06 -- host/auth.sh@44 -- # digest=sha512 00:20:25.629 19:51:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:25.629 19:51:06 -- host/auth.sh@44 -- # keyid=3 00:20:25.629 19:51:06 -- host/auth.sh@45 -- # key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:25.629 19:51:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:25.629 19:51:06 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:25.629 19:51:06 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:25.629 19:51:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:20:25.629 19:51:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:25.629 19:51:06 -- host/auth.sh@68 -- # digest=sha512 00:20:25.629 19:51:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:25.629 19:51:06 -- host/auth.sh@68 -- # keyid=3 00:20:25.629 19:51:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:25.629 19:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.629 19:51:06 -- common/autotest_common.sh@10 -- # set +x 00:20:25.629 19:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.629 19:51:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:25.629 19:51:07 -- nvmf/common.sh@717 -- # local ip 00:20:25.629 19:51:07 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:20:25.629 19:51:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:25.629 19:51:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.629 19:51:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.629 19:51:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:25.629 19:51:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.629 19:51:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:25.629 19:51:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:25.629 19:51:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:25.629 19:51:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:25.629 19:51:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.629 19:51:07 -- common/autotest_common.sh@10 -- # set +x 00:20:26.195 nvme0n1 00:20:26.195 19:51:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.195 19:51:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.195 19:51:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:26.195 19:51:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.195 19:51:07 -- common/autotest_common.sh@10 -- # set +x 00:20:26.195 19:51:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.195 19:51:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.195 19:51:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.195 19:51:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.195 19:51:07 -- common/autotest_common.sh@10 -- # set +x 00:20:26.195 19:51:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.195 19:51:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:26.195 19:51:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:26.195 19:51:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:26.195 19:51:07 -- host/auth.sh@44 -- # digest=sha512 00:20:26.195 19:51:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:26.195 19:51:07 -- host/auth.sh@44 -- # keyid=4 00:20:26.195 19:51:07 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:26.195 19:51:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:26.195 19:51:07 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:26.195 19:51:07 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:26.195 19:51:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:20:26.195 19:51:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:26.195 19:51:07 -- host/auth.sh@68 -- # digest=sha512 00:20:26.195 19:51:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:26.195 19:51:07 -- host/auth.sh@68 -- # keyid=4 00:20:26.195 19:51:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:26.195 19:51:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.195 19:51:07 -- common/autotest_common.sh@10 -- # set +x 00:20:26.195 19:51:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.195 19:51:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:26.195 19:51:07 -- nvmf/common.sh@717 -- # local ip 00:20:26.195 19:51:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:26.195 19:51:07 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:20:26.195 19:51:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.195 19:51:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.195 19:51:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:26.195 19:51:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.195 19:51:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:26.195 19:51:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:26.195 19:51:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:26.195 19:51:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:26.195 19:51:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.195 19:51:07 -- common/autotest_common.sh@10 -- # set +x 00:20:26.761 nvme0n1 00:20:26.761 19:51:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.761 19:51:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.761 19:51:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.761 19:51:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:26.761 19:51:08 -- common/autotest_common.sh@10 -- # set +x 00:20:26.761 19:51:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.761 19:51:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.761 19:51:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.761 19:51:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.761 19:51:08 -- common/autotest_common.sh@10 -- # set +x 00:20:26.761 19:51:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.761 19:51:08 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.761 19:51:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:26.761 19:51:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:26.761 19:51:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:26.761 19:51:08 -- host/auth.sh@44 -- # digest=sha512 00:20:26.761 19:51:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:26.761 19:51:08 -- host/auth.sh@44 -- # keyid=0 00:20:26.761 19:51:08 -- host/auth.sh@45 -- # key=DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:26.761 19:51:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:26.761 19:51:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:26.761 19:51:08 -- host/auth.sh@49 -- # echo DHHC-1:00:MTA1NTJkNGYyNDMwNjM4OWYwNTkxZTgxODgyMDRmMmP9mnyh: 00:20:26.761 19:51:08 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:20:26.761 19:51:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:26.761 19:51:08 -- host/auth.sh@68 -- # digest=sha512 00:20:26.761 19:51:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:26.761 19:51:08 -- host/auth.sh@68 -- # keyid=0 00:20:26.761 19:51:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:26.761 19:51:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.761 19:51:08 -- common/autotest_common.sh@10 -- # set +x 00:20:26.761 19:51:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.761 19:51:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:26.761 19:51:08 -- nvmf/common.sh@717 -- # local ip 00:20:26.761 19:51:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:26.761 19:51:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:26.761 19:51:08 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.761 19:51:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.761 19:51:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:26.761 19:51:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.761 19:51:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:26.761 19:51:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:26.761 19:51:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:26.761 19:51:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:26.761 19:51:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.761 19:51:08 -- common/autotest_common.sh@10 -- # set +x 00:20:27.696 nvme0n1 00:20:27.696 19:51:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.696 19:51:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.696 19:51:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:27.696 19:51:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.696 19:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.696 19:51:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.696 19:51:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.696 19:51:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.696 19:51:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.696 19:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.696 19:51:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.696 19:51:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:27.696 19:51:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:27.696 19:51:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:27.696 19:51:09 -- host/auth.sh@44 -- # digest=sha512 00:20:27.696 19:51:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:27.696 19:51:09 -- host/auth.sh@44 -- # keyid=1 00:20:27.696 19:51:09 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:27.696 19:51:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:27.696 19:51:09 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:27.696 19:51:09 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:27.696 19:51:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:20:27.696 19:51:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:27.696 19:51:09 -- host/auth.sh@68 -- # digest=sha512 00:20:27.696 19:51:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:27.696 19:51:09 -- host/auth.sh@68 -- # keyid=1 00:20:27.696 19:51:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:27.696 19:51:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.696 19:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.696 19:51:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.696 19:51:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:27.696 19:51:09 -- nvmf/common.sh@717 -- # local ip 00:20:27.696 19:51:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:27.696 19:51:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:27.696 19:51:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.696 19:51:09 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.696 19:51:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:27.696 19:51:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.696 19:51:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:27.696 19:51:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:27.696 19:51:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:27.696 19:51:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:27.696 19:51:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.696 19:51:09 -- common/autotest_common.sh@10 -- # set +x 00:20:29.070 nvme0n1 00:20:29.070 19:51:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.070 19:51:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.070 19:51:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.070 19:51:10 -- common/autotest_common.sh@10 -- # set +x 00:20:29.070 19:51:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:29.070 19:51:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.070 19:51:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.070 19:51:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.070 19:51:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.070 19:51:10 -- common/autotest_common.sh@10 -- # set +x 00:20:29.070 19:51:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.070 19:51:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:29.070 19:51:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:29.070 19:51:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:29.070 19:51:10 -- host/auth.sh@44 -- # digest=sha512 00:20:29.070 19:51:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:29.070 19:51:10 -- host/auth.sh@44 -- # keyid=2 00:20:29.070 19:51:10 -- host/auth.sh@45 -- # key=DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:29.070 19:51:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:29.070 19:51:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:29.070 19:51:10 -- host/auth.sh@49 -- # echo DHHC-1:01:ZDM3MDk2ZmYwNGY0NDkxOTU4YmE2MWI0Y2Q0MjJhZWMr5rEd: 00:20:29.070 19:51:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:20:29.070 19:51:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:29.070 19:51:10 -- host/auth.sh@68 -- # digest=sha512 00:20:29.070 19:51:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:29.070 19:51:10 -- host/auth.sh@68 -- # keyid=2 00:20:29.070 19:51:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:29.070 19:51:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.070 19:51:10 -- common/autotest_common.sh@10 -- # set +x 00:20:29.070 19:51:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.070 19:51:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:29.070 19:51:10 -- nvmf/common.sh@717 -- # local ip 00:20:29.070 19:51:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:29.070 19:51:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:29.070 19:51:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.070 19:51:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.070 19:51:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:29.070 19:51:10 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:20:29.070 19:51:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:29.070 19:51:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:29.070 19:51:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:29.070 19:51:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:29.070 19:51:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.070 19:51:10 -- common/autotest_common.sh@10 -- # set +x 00:20:30.005 nvme0n1 00:20:30.005 19:51:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.005 19:51:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.005 19:51:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:30.005 19:51:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.005 19:51:11 -- common/autotest_common.sh@10 -- # set +x 00:20:30.005 19:51:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.005 19:51:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.005 19:51:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.005 19:51:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.005 19:51:11 -- common/autotest_common.sh@10 -- # set +x 00:20:30.005 19:51:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.005 19:51:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:30.005 19:51:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:20:30.005 19:51:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:30.005 19:51:11 -- host/auth.sh@44 -- # digest=sha512 00:20:30.005 19:51:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:30.005 19:51:11 -- host/auth.sh@44 -- # keyid=3 00:20:30.005 19:51:11 -- host/auth.sh@45 -- # key=DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:30.005 19:51:11 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:30.005 19:51:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:30.005 19:51:11 -- host/auth.sh@49 -- # echo DHHC-1:02:NDNmZTc3ZjIyZTVkMmI4MjRiNDA0MTAxMmYyZDI4ZGRmYjc4ZDAxNjg3MWYxYTRh4VXQEQ==: 00:20:30.005 19:51:11 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:20:30.005 19:51:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:30.005 19:51:11 -- host/auth.sh@68 -- # digest=sha512 00:20:30.005 19:51:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:30.005 19:51:11 -- host/auth.sh@68 -- # keyid=3 00:20:30.005 19:51:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:30.005 19:51:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.005 19:51:11 -- common/autotest_common.sh@10 -- # set +x 00:20:30.005 19:51:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.005 19:51:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:30.005 19:51:11 -- nvmf/common.sh@717 -- # local ip 00:20:30.005 19:51:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:30.005 19:51:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:30.005 19:51:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.005 19:51:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.005 19:51:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:30.005 19:51:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.005 19:51:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:30.005 19:51:11 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:30.005 19:51:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:30.005 19:51:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:30.005 19:51:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.005 19:51:11 -- common/autotest_common.sh@10 -- # set +x 00:20:30.939 nvme0n1 00:20:30.939 19:51:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.939 19:51:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.939 19:51:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:30.939 19:51:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.939 19:51:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.939 19:51:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.939 19:51:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.939 19:51:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.939 19:51:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.939 19:51:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.939 19:51:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.939 19:51:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:30.939 19:51:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:30.939 19:51:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:30.939 19:51:12 -- host/auth.sh@44 -- # digest=sha512 00:20:30.939 19:51:12 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:30.939 19:51:12 -- host/auth.sh@44 -- # keyid=4 00:20:30.939 19:51:12 -- host/auth.sh@45 -- # key=DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:30.939 19:51:12 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:30.939 19:51:12 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:30.939 19:51:12 -- host/auth.sh@49 -- # echo DHHC-1:03:YWI0M2QwMTVjZThlMzZlYjc2MTdhMThiMDZmZjNiN2E2YzNjZDA4ZGY2MDUxYjk0NGU1YmQ5NGFiMWYzZGJmN5n+6co=: 00:20:30.939 19:51:12 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:20:30.939 19:51:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:30.939 19:51:12 -- host/auth.sh@68 -- # digest=sha512 00:20:30.939 19:51:12 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:30.939 19:51:12 -- host/auth.sh@68 -- # keyid=4 00:20:30.939 19:51:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:30.939 19:51:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.939 19:51:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.939 19:51:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.939 19:51:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:30.939 19:51:12 -- nvmf/common.sh@717 -- # local ip 00:20:30.939 19:51:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:30.939 19:51:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:30.939 19:51:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.939 19:51:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.939 19:51:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:30.939 19:51:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.939 19:51:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:30.939 19:51:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:30.939 19:51:12 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:30.939 19:51:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:30.939 19:51:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.939 19:51:12 -- common/autotest_common.sh@10 -- # set +x 00:20:31.873 nvme0n1 00:20:31.873 19:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.873 19:51:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.873 19:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.873 19:51:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:31.873 19:51:13 -- common/autotest_common.sh@10 -- # set +x 00:20:31.873 19:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.873 19:51:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.873 19:51:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.873 19:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.873 19:51:13 -- common/autotest_common.sh@10 -- # set +x 00:20:31.873 19:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.873 19:51:13 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:31.873 19:51:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:31.873 19:51:13 -- host/auth.sh@44 -- # digest=sha256 00:20:31.873 19:51:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:31.873 19:51:13 -- host/auth.sh@44 -- # keyid=1 00:20:31.873 19:51:13 -- host/auth.sh@45 -- # key=DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:31.873 19:51:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:31.873 19:51:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:31.873 19:51:13 -- host/auth.sh@49 -- # echo DHHC-1:00:N2M1Y2NlY2E3NTkyYmFlMmU4ZDQyYjQyY2Y2ZDk1MTVjOWNlMmUyNDcwYjIyYTk1kttbIA==: 00:20:31.873 19:51:13 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:31.873 19:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.873 19:51:13 -- common/autotest_common.sh@10 -- # set +x 00:20:31.873 19:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.873 19:51:13 -- host/auth.sh@119 -- # get_main_ns_ip 00:20:31.873 19:51:13 -- nvmf/common.sh@717 -- # local ip 00:20:31.873 19:51:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:31.873 19:51:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:31.873 19:51:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.873 19:51:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.873 19:51:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:31.873 19:51:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.873 19:51:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:31.873 19:51:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:31.873 19:51:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:31.873 19:51:13 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:31.873 19:51:13 -- common/autotest_common.sh@638 -- # local es=0 00:20:31.873 19:51:13 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:31.873 
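From auth.sh@117 onward the trace switches to the failure paths: the target is re-keyed (sha256/ffdhe2048, key 1), and the initiator then tries to attach first with no --dhchap-key at all and then, at auth.sh@124, with the mismatched key2. Both attempts are wrapped in NOT, the autotest helper that succeeds only when its command fails, and both produce the -32602 "Invalid parameters" JSON-RPC errors shown below. A hedged sketch, assuming NOT and rpc_cmd behave as the surrounding autotest_common.sh trace suggests:

    # Neither attach may succeed: the target demands authentication and key2 is wrong.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn"                    # no DHCHAP key offered (auth.sh@119)
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key2  # wrong key for this host (auth.sh@124)
    # After each rejected attach, no controller may be left behind (auth.sh@121/@127).
    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))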
19:51:13 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:31.873 19:51:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:31.873 19:51:13 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:31.873 19:51:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:31.873 19:51:13 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:31.873 19:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.873 19:51:13 -- common/autotest_common.sh@10 -- # set +x 00:20:31.873 request: 00:20:31.873 { 00:20:31.873 "name": "nvme0", 00:20:31.873 "trtype": "tcp", 00:20:31.873 "traddr": "10.0.0.1", 00:20:31.873 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:31.873 "adrfam": "ipv4", 00:20:31.873 "trsvcid": "4420", 00:20:31.873 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:31.873 "method": "bdev_nvme_attach_controller", 00:20:31.873 "req_id": 1 00:20:31.873 } 00:20:31.873 Got JSON-RPC error response 00:20:31.873 response: 00:20:31.873 { 00:20:31.873 "code": -32602, 00:20:31.873 "message": "Invalid parameters" 00:20:31.873 } 00:20:31.873 19:51:13 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:31.873 19:51:13 -- common/autotest_common.sh@641 -- # es=1 00:20:31.873 19:51:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:31.873 19:51:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:31.873 19:51:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:31.873 19:51:13 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.873 19:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.873 19:51:13 -- common/autotest_common.sh@10 -- # set +x 00:20:31.873 19:51:13 -- host/auth.sh@121 -- # jq length 00:20:31.873 19:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.873 19:51:13 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:20:31.873 19:51:13 -- host/auth.sh@124 -- # get_main_ns_ip 00:20:31.873 19:51:13 -- nvmf/common.sh@717 -- # local ip 00:20:31.873 19:51:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:31.873 19:51:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:31.873 19:51:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.873 19:51:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.873 19:51:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:31.873 19:51:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.873 19:51:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:31.873 19:51:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:31.873 19:51:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:31.873 19:51:13 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:31.873 19:51:13 -- common/autotest_common.sh@638 -- # local es=0 00:20:31.873 19:51:13 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:31.873 19:51:13 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:31.873 19:51:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:31.873 19:51:13 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:31.873 19:51:13 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:31.873 19:51:13 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:31.873 19:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.873 19:51:13 -- common/autotest_common.sh@10 -- # set +x 00:20:32.131 request: 00:20:32.131 { 00:20:32.131 "name": "nvme0", 00:20:32.131 "trtype": "tcp", 00:20:32.131 "traddr": "10.0.0.1", 00:20:32.131 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:32.131 "adrfam": "ipv4", 00:20:32.131 "trsvcid": "4420", 00:20:32.131 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:32.131 "dhchap_key": "key2", 00:20:32.131 "method": "bdev_nvme_attach_controller", 00:20:32.131 "req_id": 1 00:20:32.131 } 00:20:32.131 Got JSON-RPC error response 00:20:32.131 response: 00:20:32.131 { 00:20:32.131 "code": -32602, 00:20:32.131 "message": "Invalid parameters" 00:20:32.131 } 00:20:32.131 19:51:13 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:32.131 19:51:13 -- common/autotest_common.sh@641 -- # es=1 00:20:32.131 19:51:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:32.131 19:51:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:32.131 19:51:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:32.131 19:51:13 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.131 19:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.131 19:51:13 -- host/auth.sh@127 -- # jq length 00:20:32.131 19:51:13 -- common/autotest_common.sh@10 -- # set +x 00:20:32.131 19:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.131 19:51:13 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:20:32.131 19:51:13 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:20:32.131 19:51:13 -- host/auth.sh@130 -- # cleanup 00:20:32.131 19:51:13 -- host/auth.sh@24 -- # nvmftestfini 00:20:32.131 19:51:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:32.131 19:51:13 -- nvmf/common.sh@117 -- # sync 00:20:32.131 19:51:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.131 19:51:13 -- nvmf/common.sh@120 -- # set +e 00:20:32.131 19:51:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.131 19:51:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:32.131 rmmod nvme_tcp 00:20:32.131 rmmod nvme_fabrics 00:20:32.131 19:51:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.131 19:51:13 -- nvmf/common.sh@124 -- # set -e 00:20:32.131 19:51:13 -- nvmf/common.sh@125 -- # return 0 00:20:32.131 19:51:13 -- nvmf/common.sh@478 -- # '[' -n 1759577 ']' 00:20:32.131 19:51:13 -- nvmf/common.sh@479 -- # killprocess 1759577 00:20:32.131 19:51:13 -- common/autotest_common.sh@936 -- # '[' -z 1759577 ']' 00:20:32.131 19:51:13 -- common/autotest_common.sh@940 -- # kill -0 1759577 00:20:32.131 19:51:13 -- common/autotest_common.sh@941 -- # uname 00:20:32.131 19:51:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.131 19:51:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1759577 00:20:32.131 19:51:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:32.131 19:51:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:32.131 19:51:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1759577' 00:20:32.131 killing process with pid 1759577 00:20:32.131 19:51:13 -- common/autotest_common.sh@955 -- # kill 1759577 00:20:32.131 19:51:13 -- 
common/autotest_common.sh@960 -- # wait 1759577 00:20:32.390 19:51:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:32.390 19:51:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:32.390 19:51:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:32.390 19:51:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.390 19:51:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.390 19:51:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.390 19:51:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.390 19:51:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.924 19:51:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:34.924 19:51:15 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:34.924 19:51:15 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:34.924 19:51:15 -- host/auth.sh@27 -- # clean_kernel_target 00:20:34.924 19:51:15 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:34.924 19:51:15 -- nvmf/common.sh@675 -- # echo 0 00:20:34.924 19:51:15 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:34.924 19:51:15 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:34.924 19:51:15 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:34.924 19:51:15 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:34.924 19:51:15 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:20:34.924 19:51:15 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:20:34.924 19:51:15 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:20:35.859 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:20:35.859 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:20:35.859 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:20:35.859 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:20:35.859 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:20:35.859 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:20:35.859 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:20:35.859 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:20:35.859 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:20:35.859 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:20:35.859 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:20:35.859 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:20:35.859 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:20:35.859 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:20:35.859 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:20:35.859 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:20:36.794 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:20:36.794 19:51:18 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Cpy /tmp/spdk.key-null.7dl /tmp/spdk.key-sha256.ZNa /tmp/spdk.key-sha384.hOC /tmp/spdk.key-sha512.5LO /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:20:36.794 19:51:18 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:20:38.208 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:20:38.208 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:20:38.208 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:20:38.208 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:20:38.208 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:20:38.208 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:20:38.208 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:20:38.208 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:20:38.208 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:20:38.208 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:20:38.208 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:20:38.208 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:20:38.208 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:20:38.208 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:20:38.208 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:20:38.208 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:20:38.208 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:20:38.208 00:20:38.208 real 0m49.091s 00:20:38.208 user 0m46.698s 00:20:38.208 sys 0m5.664s 00:20:38.208 19:51:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:38.208 19:51:19 -- common/autotest_common.sh@10 -- # set +x 00:20:38.208 ************************************ 00:20:38.208 END TEST nvmf_auth 00:20:38.208 ************************************ 00:20:38.208 19:51:19 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:20:38.208 19:51:19 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:38.208 19:51:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:38.208 19:51:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:38.208 19:51:19 -- common/autotest_common.sh@10 -- # set +x 00:20:38.208 ************************************ 00:20:38.208 START TEST nvmf_digest 00:20:38.208 ************************************ 00:20:38.208 19:51:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:38.208 * Looking for test storage... 
00:20:38.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:38.208 19:51:19 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:38.208 19:51:19 -- nvmf/common.sh@7 -- # uname -s 00:20:38.208 19:51:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.208 19:51:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.208 19:51:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.208 19:51:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.208 19:51:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.208 19:51:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.208 19:51:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.208 19:51:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.208 19:51:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.208 19:51:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.467 19:51:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.467 19:51:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.467 19:51:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.467 19:51:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.467 19:51:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:38.467 19:51:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.467 19:51:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:38.467 19:51:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.467 19:51:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.467 19:51:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.467 19:51:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.467 19:51:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.467 19:51:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.467 19:51:19 -- paths/export.sh@5 -- # export PATH 00:20:38.467 19:51:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.467 19:51:19 -- nvmf/common.sh@47 -- # : 0 00:20:38.467 19:51:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:38.467 19:51:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:38.467 19:51:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.467 19:51:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.467 19:51:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.467 19:51:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:38.467 19:51:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:38.467 19:51:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:38.467 19:51:19 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:38.467 19:51:19 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:38.467 19:51:19 -- host/digest.sh@16 -- # runtime=2 00:20:38.467 19:51:19 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:38.467 19:51:19 -- host/digest.sh@138 -- # nvmftestinit 00:20:38.467 19:51:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:38.467 19:51:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.467 19:51:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:38.467 19:51:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:38.467 19:51:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:38.467 19:51:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.467 19:51:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:38.467 19:51:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.467 19:51:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:38.467 19:51:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:38.467 19:51:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:38.467 19:51:19 -- common/autotest_common.sh@10 -- # set +x 00:20:40.372 19:51:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:40.372 19:51:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:40.372 19:51:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:40.372 19:51:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:40.372 19:51:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:40.372 19:51:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:40.372 19:51:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:40.372 19:51:21 -- 
nvmf/common.sh@295 -- # net_devs=() 00:20:40.372 19:51:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:40.372 19:51:21 -- nvmf/common.sh@296 -- # e810=() 00:20:40.372 19:51:21 -- nvmf/common.sh@296 -- # local -ga e810 00:20:40.372 19:51:21 -- nvmf/common.sh@297 -- # x722=() 00:20:40.372 19:51:21 -- nvmf/common.sh@297 -- # local -ga x722 00:20:40.372 19:51:21 -- nvmf/common.sh@298 -- # mlx=() 00:20:40.372 19:51:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:40.372 19:51:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.372 19:51:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.372 19:51:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.372 19:51:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.372 19:51:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.372 19:51:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.372 19:51:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.372 19:51:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.372 19:51:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.372 19:51:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.372 19:51:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.372 19:51:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:40.372 19:51:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:40.372 19:51:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:40.372 19:51:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.372 19:51:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:40.372 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:40.372 19:51:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.372 19:51:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:40.372 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:40.372 19:51:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:40.372 19:51:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:40.372 19:51:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.372 19:51:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.373 19:51:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:40.373 19:51:21 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.373 19:51:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:40.373 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:40.373 19:51:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.373 19:51:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.373 19:51:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.373 19:51:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:40.373 19:51:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.373 19:51:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:40.373 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:40.373 19:51:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.373 19:51:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:40.373 19:51:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:40.373 19:51:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:40.373 19:51:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:40.373 19:51:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:40.373 19:51:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.373 19:51:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.373 19:51:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:40.373 19:51:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:40.373 19:51:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:40.373 19:51:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:40.373 19:51:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:40.373 19:51:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:40.373 19:51:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.373 19:51:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:40.373 19:51:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:40.373 19:51:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:40.373 19:51:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:40.373 19:51:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:40.373 19:51:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:40.373 19:51:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:40.373 19:51:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:40.631 19:51:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:40.631 19:51:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:40.631 19:51:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:40.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:20:40.631 00:20:40.631 --- 10.0.0.2 ping statistics --- 00:20:40.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.631 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:20:40.631 19:51:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:40.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:40.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:20:40.631 00:20:40.631 --- 10.0.0.1 ping statistics --- 00:20:40.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.631 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:20:40.631 19:51:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.631 19:51:21 -- nvmf/common.sh@411 -- # return 0 00:20:40.631 19:51:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:40.631 19:51:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.631 19:51:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:40.631 19:51:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:40.631 19:51:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.631 19:51:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:40.631 19:51:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:40.631 19:51:21 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:40.631 19:51:21 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:40.631 19:51:21 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:40.631 19:51:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:40.631 19:51:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:40.631 19:51:21 -- common/autotest_common.sh@10 -- # set +x 00:20:40.631 ************************************ 00:20:40.631 START TEST nvmf_digest_clean 00:20:40.631 ************************************ 00:20:40.631 19:51:22 -- common/autotest_common.sh@1111 -- # run_digest 00:20:40.631 19:51:22 -- host/digest.sh@120 -- # local dsa_initiator 00:20:40.631 19:51:22 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:40.631 19:51:22 -- host/digest.sh@121 -- # dsa_initiator=false 00:20:40.631 19:51:22 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:40.631 19:51:22 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:40.631 19:51:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:40.631 19:51:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:40.631 19:51:22 -- common/autotest_common.sh@10 -- # set +x 00:20:40.631 19:51:22 -- nvmf/common.sh@470 -- # nvmfpid=1769632 00:20:40.631 19:51:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:40.631 19:51:22 -- nvmf/common.sh@471 -- # waitforlisten 1769632 00:20:40.631 19:51:22 -- common/autotest_common.sh@817 -- # '[' -z 1769632 ']' 00:20:40.631 19:51:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.631 19:51:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:40.631 19:51:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.631 19:51:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:40.631 19:51:22 -- common/autotest_common.sh@10 -- # set +x 00:20:40.631 [2024-04-24 19:51:22.102845] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:20:40.631 [2024-04-24 19:51:22.102918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.631 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.890 [2024-04-24 19:51:22.168671] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.890 [2024-04-24 19:51:22.274216] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.890 [2024-04-24 19:51:22.274287] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.890 [2024-04-24 19:51:22.274311] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.890 [2024-04-24 19:51:22.274323] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.890 [2024-04-24 19:51:22.274333] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.890 [2024-04-24 19:51:22.274362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.890 19:51:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:40.890 19:51:22 -- common/autotest_common.sh@850 -- # return 0 00:20:40.890 19:51:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:40.890 19:51:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:40.890 19:51:22 -- common/autotest_common.sh@10 -- # set +x 00:20:40.890 19:51:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.890 19:51:22 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:40.890 19:51:22 -- host/digest.sh@126 -- # common_target_config 00:20:40.890 19:51:22 -- host/digest.sh@43 -- # rpc_cmd 00:20:40.890 19:51:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.890 19:51:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.149 null0 00:20:41.149 [2024-04-24 19:51:22.437084] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.149 [2024-04-24 19:51:22.461301] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.149 19:51:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.149 19:51:22 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:41.149 19:51:22 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:41.149 19:51:22 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:41.149 19:51:22 -- host/digest.sh@80 -- # rw=randread 00:20:41.149 19:51:22 -- host/digest.sh@80 -- # bs=4096 00:20:41.149 19:51:22 -- host/digest.sh@80 -- # qd=128 00:20:41.149 19:51:22 -- host/digest.sh@80 -- # scan_dsa=false 00:20:41.149 19:51:22 -- host/digest.sh@83 -- # bperfpid=1769653 00:20:41.149 19:51:22 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:41.149 19:51:22 -- host/digest.sh@84 -- # waitforlisten 1769653 /var/tmp/bperf.sock 00:20:41.149 19:51:22 -- common/autotest_common.sh@817 -- # '[' -z 1769653 ']' 00:20:41.149 19:51:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:41.149 19:51:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:41.149 19:51:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:41.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:41.149 19:51:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:41.149 19:51:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.149 [2024-04-24 19:51:22.510423] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:20:41.149 [2024-04-24 19:51:22.510509] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1769653 ] 00:20:41.149 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.149 [2024-04-24 19:51:22.577836] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.407 [2024-04-24 19:51:22.698654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.407 19:51:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:41.407 19:51:22 -- common/autotest_common.sh@850 -- # return 0 00:20:41.407 19:51:22 -- host/digest.sh@86 -- # false 00:20:41.407 19:51:22 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:41.407 19:51:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:41.666 19:51:23 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:41.666 19:51:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:42.231 nvme0n1 00:20:42.231 19:51:23 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:42.231 19:51:23 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:42.231 Running I/O for 2 seconds... 
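[editor's note] The run above follows a fixed bperf pattern: bdevperf is launched with --wait-for-rpc, its framework is started over the RPC socket, a controller is attached with TCP data digest enabled, and perform_tests drives the timed workload. A minimal sketch of that sequence, reconstructed from the trace; SPDK_DIR and BPERF_SOCK are placeholders (the trace uses the workspace checkout and /var/tmp/bperf.sock):

    #!/usr/bin/env bash
    # Sketch only: the digest-clean run sequence as traced above.
    SPDK_DIR=/path/to/spdk              # placeholder for the SPDK checkout
    BPERF_SOCK=/var/tmp/bperf.sock

    # Complete bdevperf startup (it was launched with --wait-for-rpc).
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init

    # Attach the target with --ddgst so every TCP data PDU carries a
    # crc32c data digest that the accel framework must compute and verify.
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Run the timed workload configured on the bdevperf command line.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests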
00:20:44.761 00:20:44.761 Latency(us) 00:20:44.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.761 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:44.761 nvme0n1 : 2.01 11654.24 45.52 0.00 0.00 10966.32 4296.25 26408.58 00:20:44.761 =================================================================================================================== 00:20:44.761 Total : 11654.24 45.52 0.00 0.00 10966.32 4296.25 26408.58 00:20:44.761 0 00:20:44.761 19:51:25 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:44.761 19:51:25 -- host/digest.sh@93 -- # get_accel_stats 00:20:44.761 19:51:25 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:44.761 19:51:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:44.761 19:51:25 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:44.761 | select(.opcode=="crc32c") 00:20:44.761 | "\(.module_name) \(.executed)"' 00:20:44.761 19:51:25 -- host/digest.sh@94 -- # false 00:20:44.761 19:51:25 -- host/digest.sh@94 -- # exp_module=software 00:20:44.761 19:51:25 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:44.761 19:51:25 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:44.761 19:51:25 -- host/digest.sh@98 -- # killprocess 1769653 00:20:44.761 19:51:25 -- common/autotest_common.sh@936 -- # '[' -z 1769653 ']' 00:20:44.761 19:51:25 -- common/autotest_common.sh@940 -- # kill -0 1769653 00:20:44.761 19:51:25 -- common/autotest_common.sh@941 -- # uname 00:20:44.761 19:51:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:44.761 19:51:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1769653 00:20:44.761 19:51:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:44.761 19:51:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:44.761 19:51:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1769653' 00:20:44.761 killing process with pid 1769653 00:20:44.761 19:51:26 -- common/autotest_common.sh@955 -- # kill 1769653 00:20:44.761 Received shutdown signal, test time was about 2.000000 seconds 00:20:44.761 00:20:44.761 Latency(us) 00:20:44.761 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.761 =================================================================================================================== 00:20:44.761 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:44.761 19:51:26 -- common/autotest_common.sh@960 -- # wait 1769653 00:20:45.020 19:51:26 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:45.020 19:51:26 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:45.020 19:51:26 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:45.020 19:51:26 -- host/digest.sh@80 -- # rw=randread 00:20:45.020 19:51:26 -- host/digest.sh@80 -- # bs=131072 00:20:45.020 19:51:26 -- host/digest.sh@80 -- # qd=16 00:20:45.020 19:51:26 -- host/digest.sh@80 -- # scan_dsa=false 00:20:45.020 19:51:26 -- host/digest.sh@83 -- # bperfpid=1770180 00:20:45.020 19:51:26 -- host/digest.sh@84 -- # waitforlisten 1770180 /var/tmp/bperf.sock 00:20:45.020 19:51:26 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:45.020 19:51:26 -- common/autotest_common.sh@817 -- # '[' -z 1770180 ']' 00:20:45.020 19:51:26 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:45.020 19:51:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:45.020 19:51:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:45.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:45.020 19:51:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:45.020 19:51:26 -- common/autotest_common.sh@10 -- # set +x 00:20:45.020 [2024-04-24 19:51:26.321611] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:20:45.020 [2024-04-24 19:51:26.321715] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770180 ] 00:20:45.020 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:45.020 Zero copy mechanism will not be used. 00:20:45.020 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.020 [2024-04-24 19:51:26.383398] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.020 [2024-04-24 19:51:26.500360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.020 19:51:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:45.020 19:51:26 -- common/autotest_common.sh@850 -- # return 0 00:20:45.020 19:51:26 -- host/digest.sh@86 -- # false 00:20:45.020 19:51:26 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:45.020 19:51:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:45.587 19:51:26 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:45.587 19:51:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:45.845 nvme0n1 00:20:45.845 19:51:27 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:45.845 19:51:27 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:46.103 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:46.103 Zero copy mechanism will not be used. 00:20:46.103 Running I/O for 2 seconds... 
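[editor's note] After each run the script reads accel statistics back from bdevperf and asserts that crc32c work was actually executed, and by the expected module (software here, since no DSA initiator is configured). A sketch of that check, reusing the placeholder variables from the previous sketch; the jq filter is taken verbatim from the trace:

    # Sketch: read crc32c stats back from bdevperf and verify the module.
    read -r acc_module acc_executed < <(
        "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"')

    (( acc_executed > 0 ))      || exit 1   # digests were really computed
    [[ $acc_module == software ]] || exit 1 # and by the expected module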
00:20:48.003 00:20:48.003 Latency(us) 00:20:48.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.003 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:48.003 nvme0n1 : 2.00 2333.49 291.69 0.00 0.00 6853.32 6189.51 9272.13 00:20:48.003 =================================================================================================================== 00:20:48.003 Total : 2333.49 291.69 0.00 0.00 6853.32 6189.51 9272.13 00:20:48.003 0 00:20:48.004 19:51:29 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:48.004 19:51:29 -- host/digest.sh@93 -- # get_accel_stats 00:20:48.004 19:51:29 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:48.004 19:51:29 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:48.004 | select(.opcode=="crc32c") 00:20:48.004 | "\(.module_name) \(.executed)"' 00:20:48.004 19:51:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:48.262 19:51:29 -- host/digest.sh@94 -- # false 00:20:48.262 19:51:29 -- host/digest.sh@94 -- # exp_module=software 00:20:48.262 19:51:29 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:48.262 19:51:29 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:48.262 19:51:29 -- host/digest.sh@98 -- # killprocess 1770180 00:20:48.262 19:51:29 -- common/autotest_common.sh@936 -- # '[' -z 1770180 ']' 00:20:48.262 19:51:29 -- common/autotest_common.sh@940 -- # kill -0 1770180 00:20:48.262 19:51:29 -- common/autotest_common.sh@941 -- # uname 00:20:48.262 19:51:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:48.262 19:51:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1770180 00:20:48.262 19:51:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:48.262 19:51:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:48.262 19:51:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1770180' 00:20:48.262 killing process with pid 1770180 00:20:48.262 19:51:29 -- common/autotest_common.sh@955 -- # kill 1770180 00:20:48.262 Received shutdown signal, test time was about 2.000000 seconds 00:20:48.262 00:20:48.262 Latency(us) 00:20:48.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.262 =================================================================================================================== 00:20:48.262 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:48.262 19:51:29 -- common/autotest_common.sh@960 -- # wait 1770180 00:20:48.521 19:51:29 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:48.521 19:51:29 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:48.521 19:51:29 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:48.521 19:51:29 -- host/digest.sh@80 -- # rw=randwrite 00:20:48.521 19:51:29 -- host/digest.sh@80 -- # bs=4096 00:20:48.521 19:51:29 -- host/digest.sh@80 -- # qd=128 00:20:48.521 19:51:29 -- host/digest.sh@80 -- # scan_dsa=false 00:20:48.521 19:51:29 -- host/digest.sh@83 -- # bperfpid=1770588 00:20:48.521 19:51:29 -- host/digest.sh@84 -- # waitforlisten 1770588 /var/tmp/bperf.sock 00:20:48.521 19:51:29 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:48.521 19:51:29 -- common/autotest_common.sh@817 -- # '[' -z 1770588 ']' 00:20:48.521 19:51:29 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:48.521 19:51:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:48.521 19:51:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:48.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:48.521 19:51:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:48.521 19:51:29 -- common/autotest_common.sh@10 -- # set +x 00:20:48.521 [2024-04-24 19:51:29.989278] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:20:48.521 [2024-04-24 19:51:29.989374] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770588 ] 00:20:48.521 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.780 [2024-04-24 19:51:30.055947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.780 [2024-04-24 19:51:30.174009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.780 19:51:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:48.780 19:51:30 -- common/autotest_common.sh@850 -- # return 0 00:20:48.780 19:51:30 -- host/digest.sh@86 -- # false 00:20:48.780 19:51:30 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:48.780 19:51:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:49.039 19:51:30 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:49.039 19:51:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:49.606 nvme0n1 00:20:49.606 19:51:30 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:49.606 19:51:30 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:49.606 Running I/O for 2 seconds... 
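[editor's note] Each bperf instance is torn down with autotest's killprocess helper, whose trace is visible above: it checks the pid is alive, refuses to kill a bare sudo, then kills and reaps the process. A simplified re-creation under those assumptions; the real helper in autotest_common.sh carries additional platform handling:

    # Simplified sketch of the killprocess pattern traced above.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1       # must still be running
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name != sudo ]] || return 1      # never kill a bare sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                          # reap; tolerate non-zero exit
    }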
00:20:52.170 00:20:52.170 Latency(us) 00:20:52.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.170 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:52.170 nvme0n1 : 2.00 21097.16 82.41 0.00 0.00 6060.90 3179.71 14369.37 00:20:52.170 =================================================================================================================== 00:20:52.170 Total : 21097.16 82.41 0.00 0.00 6060.90 3179.71 14369.37 00:20:52.170 0 00:20:52.170 19:51:33 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:52.170 19:51:33 -- host/digest.sh@93 -- # get_accel_stats 00:20:52.170 19:51:33 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:52.170 19:51:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:52.170 19:51:33 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:52.170 | select(.opcode=="crc32c") 00:20:52.170 | "\(.module_name) \(.executed)"' 00:20:52.170 19:51:33 -- host/digest.sh@94 -- # false 00:20:52.170 19:51:33 -- host/digest.sh@94 -- # exp_module=software 00:20:52.170 19:51:33 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:52.170 19:51:33 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:52.170 19:51:33 -- host/digest.sh@98 -- # killprocess 1770588 00:20:52.170 19:51:33 -- common/autotest_common.sh@936 -- # '[' -z 1770588 ']' 00:20:52.170 19:51:33 -- common/autotest_common.sh@940 -- # kill -0 1770588 00:20:52.170 19:51:33 -- common/autotest_common.sh@941 -- # uname 00:20:52.170 19:51:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:52.170 19:51:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1770588 00:20:52.170 19:51:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:52.170 19:51:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:52.170 19:51:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1770588' 00:20:52.170 killing process with pid 1770588 00:20:52.170 19:51:33 -- common/autotest_common.sh@955 -- # kill 1770588 00:20:52.170 Received shutdown signal, test time was about 2.000000 seconds 00:20:52.170 00:20:52.170 Latency(us) 00:20:52.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.170 =================================================================================================================== 00:20:52.170 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:52.170 19:51:33 -- common/autotest_common.sh@960 -- # wait 1770588 00:20:52.171 19:51:33 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:52.171 19:51:33 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:52.171 19:51:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:52.171 19:51:33 -- host/digest.sh@80 -- # rw=randwrite 00:20:52.171 19:51:33 -- host/digest.sh@80 -- # bs=131072 00:20:52.171 19:51:33 -- host/digest.sh@80 -- # qd=16 00:20:52.171 19:51:33 -- host/digest.sh@80 -- # scan_dsa=false 00:20:52.171 19:51:33 -- host/digest.sh@83 -- # bperfpid=1771001 00:20:52.171 19:51:33 -- host/digest.sh@84 -- # waitforlisten 1771001 /var/tmp/bperf.sock 00:20:52.171 19:51:33 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:52.171 19:51:33 -- common/autotest_common.sh@817 -- # '[' -z 1771001 ']' 00:20:52.171 
19:51:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:52.171 19:51:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:52.171 19:51:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:52.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:52.171 19:51:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:52.171 19:51:33 -- common/autotest_common.sh@10 -- # set +x 00:20:52.171 [2024-04-24 19:51:33.636544] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:20:52.171 [2024-04-24 19:51:33.636637] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771001 ] 00:20:52.171 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:52.171 Zero copy mechanism will not be used. 00:20:52.171 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.429 [2024-04-24 19:51:33.697785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.429 [2024-04-24 19:51:33.812921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.429 19:51:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:52.429 19:51:33 -- common/autotest_common.sh@850 -- # return 0 00:20:52.429 19:51:33 -- host/digest.sh@86 -- # false 00:20:52.429 19:51:33 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:52.429 19:51:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:52.686 19:51:34 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:52.686 19:51:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:53.252 nvme0n1 00:20:53.252 19:51:34 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:53.252 19:51:34 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:53.252 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:53.252 Zero copy mechanism will not be used. 00:20:53.253 Running I/O for 2 seconds... 
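[editor's note] The four digest-clean runs differ only in workload and I/O shape: randread and randwrite, each at 4 KiB/qd128 and 128 KiB/qd16 (the 128 KiB runs report that zero copy is skipped because 131072 exceeds the 65536-byte threshold). A sketch of the sweep, with the same placeholders as above; the bdevperf flags match the trace:

    # Sketch: the parameter sweep behind the four runs above.
    for spec in "randread 4096 128" "randread 131072 16" \
                "randwrite 4096 128" "randwrite 131072 16"; do
        set -- $spec                  # $1=workload $2=io_size $3=queue_depth
        "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
            -w "$1" -o "$2" -q "$3" -t 2 -z --wait-for-rpc &
        bperfpid=$!
        # ... framework_start_init, attach with --ddgst, perform_tests,
        #     accel stats check (see the sketches above) ...
        killprocess "$bperfpid"
    done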
00:20:55.789 00:20:55.789 Latency(us) 00:20:55.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.789 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:55.789 nvme0n1 : 2.01 1358.83 169.85 0.00 0.00 11741.63 2949.12 16990.81 00:20:55.789 =================================================================================================================== 00:20:55.789 Total : 1358.83 169.85 0.00 0.00 11741.63 2949.12 16990.81 00:20:55.789 0 00:20:55.789 19:51:36 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:55.789 19:51:36 -- host/digest.sh@93 -- # get_accel_stats 00:20:55.789 19:51:36 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:55.789 19:51:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:55.789 19:51:36 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:55.789 | select(.opcode=="crc32c") 00:20:55.789 | "\(.module_name) \(.executed)"' 00:20:55.789 19:51:37 -- host/digest.sh@94 -- # false 00:20:55.789 19:51:37 -- host/digest.sh@94 -- # exp_module=software 00:20:55.789 19:51:37 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:55.789 19:51:37 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:55.789 19:51:37 -- host/digest.sh@98 -- # killprocess 1771001 00:20:55.789 19:51:37 -- common/autotest_common.sh@936 -- # '[' -z 1771001 ']' 00:20:55.789 19:51:37 -- common/autotest_common.sh@940 -- # kill -0 1771001 00:20:55.789 19:51:37 -- common/autotest_common.sh@941 -- # uname 00:20:55.789 19:51:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:55.789 19:51:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1771001 00:20:55.789 19:51:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:55.789 19:51:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:55.789 19:51:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1771001' 00:20:55.789 killing process with pid 1771001 00:20:55.789 19:51:37 -- common/autotest_common.sh@955 -- # kill 1771001 00:20:55.789 Received shutdown signal, test time was about 2.000000 seconds 00:20:55.789 00:20:55.789 Latency(us) 00:20:55.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.789 =================================================================================================================== 00:20:55.789 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.789 19:51:37 -- common/autotest_common.sh@960 -- # wait 1771001 00:20:55.789 19:51:37 -- host/digest.sh@132 -- # killprocess 1769632 00:20:55.789 19:51:37 -- common/autotest_common.sh@936 -- # '[' -z 1769632 ']' 00:20:55.789 19:51:37 -- common/autotest_common.sh@940 -- # kill -0 1769632 00:20:55.789 19:51:37 -- common/autotest_common.sh@941 -- # uname 00:20:56.048 19:51:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:56.048 19:51:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1769632 00:20:56.048 19:51:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:56.048 19:51:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:56.048 19:51:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1769632' 00:20:56.048 killing process with pid 1769632 00:20:56.048 19:51:37 -- common/autotest_common.sh@955 -- # kill 1769632 00:20:56.048 19:51:37 -- common/autotest_common.sh@960 -- # wait 1769632 
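[editor's note] The target process just killed (pid 1769632) had been configured by digest.sh's common_target_config: a null bdev exported through nqn.2016-06.io.spdk:cnode1 with a TCP listener on 10.0.0.2:4420, and the digest-error test that follows repeats the same bring-up. A hedged sketch of that target-side configuration; only the null0 bdev name, subsystem NQN, transport options, address, and port are visible in the trace, so the null bdev size and block size below are assumptions:

    # Sketch of the target-side config implied by the listener messages.
    tgt_rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }   # stands in for rpc_cmd

    tgt_rpc framework_start_init                     # target ran --wait-for-rpc
    tgt_rpc nvmf_create_transport -t tcp -o          # "TCP Transport Init"
    tgt_rpc bdev_null_create null0 100 4096          # size/block size assumed
    tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                   # "Listening on 10.0.0.2 port 4420"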
00:20:56.306 00:20:56.306 real 0m15.540s 00:20:56.306 user 0m30.372s 00:20:56.306 sys 0m4.038s 00:20:56.306 19:51:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:56.306 19:51:37 -- common/autotest_common.sh@10 -- # set +x 00:20:56.306 ************************************ 00:20:56.306 END TEST nvmf_digest_clean 00:20:56.306 ************************************ 00:20:56.306 19:51:37 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:56.306 19:51:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:56.306 19:51:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:56.306 19:51:37 -- common/autotest_common.sh@10 -- # set +x 00:20:56.306 ************************************ 00:20:56.306 START TEST nvmf_digest_error 00:20:56.306 ************************************ 00:20:56.306 19:51:37 -- common/autotest_common.sh@1111 -- # run_digest_error 00:20:56.306 19:51:37 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:56.306 19:51:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:56.306 19:51:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:56.306 19:51:37 -- common/autotest_common.sh@10 -- # set +x 00:20:56.306 19:51:37 -- nvmf/common.sh@470 -- # nvmfpid=1771558 00:20:56.306 19:51:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:56.306 19:51:37 -- nvmf/common.sh@471 -- # waitforlisten 1771558 00:20:56.306 19:51:37 -- common/autotest_common.sh@817 -- # '[' -z 1771558 ']' 00:20:56.306 19:51:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.306 19:51:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:56.306 19:51:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.306 19:51:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:56.306 19:51:37 -- common/autotest_common.sh@10 -- # set +x 00:20:56.306 [2024-04-24 19:51:37.767227] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:20:56.306 [2024-04-24 19:51:37.767330] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.306 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.565 [2024-04-24 19:51:37.836528] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.565 [2024-04-24 19:51:37.949855] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.565 [2024-04-24 19:51:37.949933] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.565 [2024-04-24 19:51:37.949949] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.565 [2024-04-24 19:51:37.949981] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.565 [2024-04-24 19:51:37.949993] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:56.565 [2024-04-24 19:51:37.950028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.498 19:51:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:57.498 19:51:38 -- common/autotest_common.sh@850 -- # return 0 00:20:57.498 19:51:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:57.498 19:51:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:57.498 19:51:38 -- common/autotest_common.sh@10 -- # set +x 00:20:57.498 19:51:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.498 19:51:38 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:57.498 19:51:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.498 19:51:38 -- common/autotest_common.sh@10 -- # set +x 00:20:57.498 [2024-04-24 19:51:38.752515] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:57.498 19:51:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.498 19:51:38 -- host/digest.sh@105 -- # common_target_config 00:20:57.498 19:51:38 -- host/digest.sh@43 -- # rpc_cmd 00:20:57.498 19:51:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:57.498 19:51:38 -- common/autotest_common.sh@10 -- # set +x 00:20:57.498 null0 00:20:57.498 [2024-04-24 19:51:38.871774] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.498 [2024-04-24 19:51:38.895992] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.498 19:51:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:57.498 19:51:38 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:57.498 19:51:38 -- host/digest.sh@54 -- # local rw bs qd 00:20:57.498 19:51:38 -- host/digest.sh@56 -- # rw=randread 00:20:57.498 19:51:38 -- host/digest.sh@56 -- # bs=4096 00:20:57.498 19:51:38 -- host/digest.sh@56 -- # qd=128 00:20:57.498 19:51:38 -- host/digest.sh@58 -- # bperfpid=1771710 00:20:57.498 19:51:38 -- host/digest.sh@60 -- # waitforlisten 1771710 /var/tmp/bperf.sock 00:20:57.498 19:51:38 -- common/autotest_common.sh@817 -- # '[' -z 1771710 ']' 00:20:57.498 19:51:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:57.498 19:51:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:57.498 19:51:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:57.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:57.498 19:51:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:57.498 19:51:38 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:57.498 19:51:38 -- common/autotest_common.sh@10 -- # set +x 00:20:57.498 [2024-04-24 19:51:38.944399] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:20:57.498 [2024-04-24 19:51:38.944471] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1771710 ] 00:20:57.498 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.498 [2024-04-24 19:51:39.002207] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.756 [2024-04-24 19:51:39.111282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.756 19:51:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:57.756 19:51:39 -- common/autotest_common.sh@850 -- # return 0 00:20:57.756 19:51:39 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:57.756 19:51:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:58.014 19:51:39 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:58.014 19:51:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.014 19:51:39 -- common/autotest_common.sh@10 -- # set +x 00:20:58.014 19:51:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.014 19:51:39 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:58.014 19:51:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:58.581 nvme0n1 00:20:58.581 19:51:39 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:58.581 19:51:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.581 19:51:39 -- common/autotest_common.sh@10 -- # set +x 00:20:58.581 19:51:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.581 19:51:39 -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:58.581 19:51:39 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:58.581 Running I/O for 2 seconds... 
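[editor's note] The error-path variant routes crc32c through the accel error module on the target and, once the controller is attached cleanly, injects corruption so the host starts reporting data digest errors (the nvme_tcp "data digest error" lines that follow). A sketch of that setup in trace order; bperf_rpc mirrors the script's own helper, while tgt_rpc is my shorthand for its rpc_cmd:

    # Sketch of the digest-error setup, in the order traced above.
    tgt_rpc()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }
    bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; }

    tgt_rpc accel_assign_opc -o crc32c -m error   # crc32c now runs in the error module
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    tgt_rpc accel_error_inject_error -o crc32c -t disable   # attach must succeed cleanly
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt the next 256 ops
    # ...then perform_tests as in the clean runs.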
00:20:58.839 [2024-04-24 19:51:40.095772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.839 [2024-04-24 19:51:40.095834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.839 [2024-04-24 19:51:40.095856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.839 [2024-04-24 19:51:40.112617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.839 [2024-04-24 19:51:40.112678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.839 [2024-04-24 19:51:40.112697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.839 [2024-04-24 19:51:40.124685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.839 [2024-04-24 19:51:40.124719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.839 [2024-04-24 19:51:40.124735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.839 [2024-04-24 19:51:40.140270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.839 [2024-04-24 19:51:40.140306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.839 [2024-04-24 19:51:40.140325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.839 [2024-04-24 19:51:40.155540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.839 [2024-04-24 19:51:40.155575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.839 [2024-04-24 19:51:40.155594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.839 [2024-04-24 19:51:40.168016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.839 [2024-04-24 19:51:40.168050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.839 [2024-04-24 19:51:40.168069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.839 [2024-04-24 19:51:40.183435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.839 [2024-04-24 19:51:40.183470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.839 [2024-04-24 19:51:40.183494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.839 [2024-04-24 19:51:40.198201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.839 [2024-04-24 19:51:40.198235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.839 [2024-04-24 19:51:40.198254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.839 [2024-04-24 19:51:40.211338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.839 [2024-04-24 19:51:40.211373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.839 [2024-04-24 19:51:40.211393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.839 [2024-04-24 19:51:40.225835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.839 [2024-04-24 19:51:40.225865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.839 [2024-04-24 19:51:40.225881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.839 [2024-04-24 19:51:40.239952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.840 [2024-04-24 19:51:40.240003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.840 [2024-04-24 19:51:40.240021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.840 [2024-04-24 19:51:40.252418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.840 [2024-04-24 19:51:40.252452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.840 [2024-04-24 19:51:40.252470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.840 [2024-04-24 19:51:40.268443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.840 [2024-04-24 19:51:40.268478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.840 [2024-04-24 19:51:40.268497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.840 [2024-04-24 19:51:40.281995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.840 [2024-04-24 19:51:40.282030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.840 [2024-04-24 19:51:40.282049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.840 [2024-04-24 19:51:40.296244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.840 [2024-04-24 19:51:40.296280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.840 [2024-04-24 19:51:40.296300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.840 [2024-04-24 19:51:40.311055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.840 [2024-04-24 19:51:40.311091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.840 [2024-04-24 19:51:40.311116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.840 [2024-04-24 19:51:40.324912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.840 [2024-04-24 19:51:40.324962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.840 [2024-04-24 19:51:40.324981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:58.840 [2024-04-24 19:51:40.338573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:58.840 [2024-04-24 19:51:40.338608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:58.840 [2024-04-24 19:51:40.338626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.098 [2024-04-24 19:51:40.354020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.098 [2024-04-24 19:51:40.354057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.098 [2024-04-24 19:51:40.354076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.367289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.367325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.367344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.381441] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.381475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:59.099 [2024-04-24 19:51:40.381494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.395676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.395707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.395724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.409968] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.409999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.410015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.423765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.423796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.423813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.435899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.435932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.435949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.449345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.449374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.449391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.463029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.463060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.463076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.476106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.476137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:21818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.476154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.489512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.489543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.489559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.501529] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.501562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.501594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.514850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.514880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.514897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.528637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.528682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.528698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.541761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.541790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.541812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.553081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.553111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.553127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.566617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.566668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.566685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.580755] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.580801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.580818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.593129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.593159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.593176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.099 [2024-04-24 19:51:40.607398] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.099 [2024-04-24 19:51:40.607430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.099 [2024-04-24 19:51:40.607447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.619754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.619787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.619804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.631599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.631647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.631665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.646316] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.646347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.646364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.658772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 
00:20:59.358 [2024-04-24 19:51:40.658809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.658827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.672310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.672341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.672359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.684279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.684307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.684322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.698442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.698474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.698491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.708848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.708879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.708896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.723713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.723743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.723761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.736592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.736645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.736664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.751403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.751434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.751450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.761980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.762025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.762042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.776480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.776509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.776525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.789935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.789966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.789983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.801958] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.801990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.802007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.815092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.815127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.358 [2024-04-24 19:51:40.815146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.358 [2024-04-24 19:51:40.829979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.358 [2024-04-24 19:51:40.830013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.359 [2024-04-24 19:51:40.830032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.359 [2024-04-24 19:51:40.842128] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.359 [2024-04-24 19:51:40.842162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.359 [2024-04-24 19:51:40.842180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.359 [2024-04-24 19:51:40.857908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.359 [2024-04-24 19:51:40.857942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.359 [2024-04-24 19:51:40.857973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.359 [2024-04-24 19:51:40.869318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.359 [2024-04-24 19:51:40.869352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.359 [2024-04-24 19:51:40.869371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.617 [2024-04-24 19:51:40.885506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.617 [2024-04-24 19:51:40.885541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.617 [2024-04-24 19:51:40.885567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.617 [2024-04-24 19:51:40.899116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.617 [2024-04-24 19:51:40.899150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.617 [2024-04-24 19:51:40.899168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.617 [2024-04-24 19:51:40.912509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.617 [2024-04-24 19:51:40.912543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.617 [2024-04-24 19:51:40.912562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.617 [2024-04-24 19:51:40.927337] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.617 [2024-04-24 19:51:40.927371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.617 [2024-04-24 19:51:40.927390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:20:59.617 [2024-04-24 19:51:40.939513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.617 [2024-04-24 19:51:40.939547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.617 [2024-04-24 19:51:40.939565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.617 [2024-04-24 19:51:40.955817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.617 [2024-04-24 19:51:40.955863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.617 [2024-04-24 19:51:40.955879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.617 [2024-04-24 19:51:40.970261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.617 [2024-04-24 19:51:40.970294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.617 [2024-04-24 19:51:40.970313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.617 [2024-04-24 19:51:40.985022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.617 [2024-04-24 19:51:40.985057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.617 [2024-04-24 19:51:40.985075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.617 [2024-04-24 19:51:40.999220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.617 [2024-04-24 19:51:40.999253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.617 [2024-04-24 19:51:40.999271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.617 [2024-04-24 19:51:41.011874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.617 [2024-04-24 19:51:41.011905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.617 [2024-04-24 19:51:41.011922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.617 [2024-04-24 19:51:41.025706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.617 [2024-04-24 19:51:41.025737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.617 [2024-04-24 19:51:41.025753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.618 [2024-04-24 19:51:41.040364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.618 [2024-04-24 19:51:41.040398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.618 [2024-04-24 19:51:41.040417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.618 [2024-04-24 19:51:41.053962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.618 [2024-04-24 19:51:41.053996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.618 [2024-04-24 19:51:41.054014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.618 [2024-04-24 19:51:41.068745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.618 [2024-04-24 19:51:41.068777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.618 [2024-04-24 19:51:41.068793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.618 [2024-04-24 19:51:41.082063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.618 [2024-04-24 19:51:41.082096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.618 [2024-04-24 19:51:41.082115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.618 [2024-04-24 19:51:41.096387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.618 [2024-04-24 19:51:41.096421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.618 [2024-04-24 19:51:41.096440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.618 [2024-04-24 19:51:41.111101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.618 [2024-04-24 19:51:41.111135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.618 [2024-04-24 19:51:41.111154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.618 [2024-04-24 19:51:41.123205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.618 [2024-04-24 19:51:41.123239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.618 [2024-04-24 19:51:41.123263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.877 [2024-04-24 19:51:41.136923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.877 [2024-04-24 19:51:41.136958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.877 [2024-04-24 19:51:41.136976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.877 [2024-04-24 19:51:41.152567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.877 [2024-04-24 19:51:41.152601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.877 [2024-04-24 19:51:41.152620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.877 [2024-04-24 19:51:41.164446] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.877 [2024-04-24 19:51:41.164480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.877 [2024-04-24 19:51:41.164498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.877 [2024-04-24 19:51:41.178178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.877 [2024-04-24 19:51:41.178212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.877 [2024-04-24 19:51:41.178231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.877 [2024-04-24 19:51:41.193817] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.877 [2024-04-24 19:51:41.193849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.877 [2024-04-24 19:51:41.193866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.877 [2024-04-24 19:51:41.208000] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.877 [2024-04-24 19:51:41.208035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.877 [2024-04-24 19:51:41.208054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.877 [2024-04-24 19:51:41.220228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.878 [2024-04-24 19:51:41.220261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:59.878 [2024-04-24 19:51:41.220281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.878 [2024-04-24 19:51:41.234777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.878 [2024-04-24 19:51:41.234807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.878 [2024-04-24 19:51:41.234824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.878 [2024-04-24 19:51:41.250431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.878 [2024-04-24 19:51:41.250471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.878 [2024-04-24 19:51:41.250491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.878 [2024-04-24 19:51:41.263439] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.878 [2024-04-24 19:51:41.263474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.878 [2024-04-24 19:51:41.263493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.878 [2024-04-24 19:51:41.277571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.878 [2024-04-24 19:51:41.277606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.878 [2024-04-24 19:51:41.277625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.878 [2024-04-24 19:51:41.292884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.878 [2024-04-24 19:51:41.292932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.878 [2024-04-24 19:51:41.292951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.878 [2024-04-24 19:51:41.305904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.878 [2024-04-24 19:51:41.305950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.878 [2024-04-24 19:51:41.305967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.878 [2024-04-24 19:51:41.319844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.878 [2024-04-24 19:51:41.319874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 
lba:16574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.878 [2024-04-24 19:51:41.319906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.878 [2024-04-24 19:51:41.332719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.878 [2024-04-24 19:51:41.332750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.878 [2024-04-24 19:51:41.332767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.878 [2024-04-24 19:51:41.348288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.878 [2024-04-24 19:51:41.348324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.878 [2024-04-24 19:51:41.348342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.878 [2024-04-24 19:51:41.363221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.878 [2024-04-24 19:51:41.363255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.878 [2024-04-24 19:51:41.363273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.878 [2024-04-24 19:51:41.375326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.878 [2024-04-24 19:51:41.375360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.878 [2024-04-24 19:51:41.375379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:59.878 [2024-04-24 19:51:41.388757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:20:59.878 [2024-04-24 19:51:41.388800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:59.878 [2024-04-24 19:51:41.388818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.136 [2024-04-24 19:51:41.405528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.136 [2024-04-24 19:51:41.405563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-04-24 19:51:41.405582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.136 [2024-04-24 19:51:41.418255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.136 [2024-04-24 19:51:41.418289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-04-24 19:51:41.418307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.136 [2024-04-24 19:51:41.433184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.136 [2024-04-24 19:51:41.433218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-04-24 19:51:41.433236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.136 [2024-04-24 19:51:41.446096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.136 [2024-04-24 19:51:41.446131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-04-24 19:51:41.446149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.136 [2024-04-24 19:51:41.459937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.136 [2024-04-24 19:51:41.459972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.136 [2024-04-24 19:51:41.459990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.137 [2024-04-24 19:51:41.472819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.137 [2024-04-24 19:51:41.472849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-04-24 19:51:41.472865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.137 [2024-04-24 19:51:41.487550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.137 [2024-04-24 19:51:41.487584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-04-24 19:51:41.487609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.137 [2024-04-24 19:51:41.502474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.137 [2024-04-24 19:51:41.502508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-04-24 19:51:41.502527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.137 [2024-04-24 19:51:41.514750] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 
00:21:00.137 [2024-04-24 19:51:41.514780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-04-24 19:51:41.514797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.137 [2024-04-24 19:51:41.529138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.137 [2024-04-24 19:51:41.529172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-04-24 19:51:41.529191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.137 [2024-04-24 19:51:41.543403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.137 [2024-04-24 19:51:41.543437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-04-24 19:51:41.543456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.137 [2024-04-24 19:51:41.557131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.137 [2024-04-24 19:51:41.557167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-04-24 19:51:41.557186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.137 [2024-04-24 19:51:41.570121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.137 [2024-04-24 19:51:41.570155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-04-24 19:51:41.570173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.137 [2024-04-24 19:51:41.585947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.137 [2024-04-24 19:51:41.585978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-04-24 19:51:41.586011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.137 [2024-04-24 19:51:41.598080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.137 [2024-04-24 19:51:41.598115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-04-24 19:51:41.598134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.137 [2024-04-24 19:51:41.613378] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.137 [2024-04-24 19:51:41.613412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-04-24 19:51:41.613431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.137 [2024-04-24 19:51:41.625489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.137 [2024-04-24 19:51:41.625523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-04-24 19:51:41.625542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.137 [2024-04-24 19:51:41.640257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.137 [2024-04-24 19:51:41.640291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.137 [2024-04-24 19:51:41.640310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.395 [2024-04-24 19:51:41.654223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.654258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.654277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.667781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.667812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.667829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.681869] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.681900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.681916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.695939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.695989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.696008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
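Each event in this stream has the same three-line shape: nvme_tcp.c:1447 flags the digest mismatch on the receive path, nvme_qpair.c prints the READ that carried it, and the completion is reported as TRANSIENT TRANSPORT ERROR (00/22), i.e. generic status type 0x0 with status code 0x22, the retriable status the initiator raises when a received payload fails its CRC check. Because of the --bdev-retry-count -1 setting above, each of these is resubmitted rather than failed outright. A quick way to tally them from a saved copy of this console output (the bperf.log filename is hypothetical):

    # Total injected digest errors, then the command IDs they landed on.
    grep -c 'data digest error' bperf.log
    grep -o 'READ sqid:1 cid:[0-9]*' bperf.log | sort | uniq -c | sort -rn | head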
00:21:00.396 [2024-04-24 19:51:41.711375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.711410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.711428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.723108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.723142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.723166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.736876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.736907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.736939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.751307] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.751341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.751360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.766290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.766325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.766343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.779745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.779777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.779794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.794011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.794045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.794063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.807413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.807447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.807466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.821894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.821943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.821962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.835088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.835122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.835141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.849040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.849080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.849100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.864465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.864500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.864519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.876652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.876701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.876719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.891229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.891264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.891282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.396 [2024-04-24 19:51:41.906516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.396 [2024-04-24 19:51:41.906550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.396 [2024-04-24 19:51:41.906569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.654 [2024-04-24 19:51:41.920560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.654 [2024-04-24 19:51:41.920595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.654 [2024-04-24 19:51:41.920614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.654 [2024-04-24 19:51:41.932895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.655 [2024-04-24 19:51:41.932925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.655 [2024-04-24 19:51:41.932941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.655 [2024-04-24 19:51:41.949002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.655 [2024-04-24 19:51:41.949037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.655 [2024-04-24 19:51:41.949056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.655 [2024-04-24 19:51:41.960994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.655 [2024-04-24 19:51:41.961027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.655 [2024-04-24 19:51:41.961046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.655 [2024-04-24 19:51:41.975019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.655 [2024-04-24 19:51:41.975053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.655 [2024-04-24 19:51:41.975072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.655 [2024-04-24 19:51:41.989778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.655 [2024-04-24 19:51:41.989810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.655 
[2024-04-24 19:51:41.989827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.655 [2024-04-24 19:51:42.003335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.655 [2024-04-24 19:51:42.003369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.655 [2024-04-24 19:51:42.003387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.655 [2024-04-24 19:51:42.017877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.655 [2024-04-24 19:51:42.017907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.655 [2024-04-24 19:51:42.017924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.655 [2024-04-24 19:51:42.030374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.655 [2024-04-24 19:51:42.030408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.655 [2024-04-24 19:51:42.030426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.655 [2024-04-24 19:51:42.045503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.655 [2024-04-24 19:51:42.045537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.655 [2024-04-24 19:51:42.045555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.655 [2024-04-24 19:51:42.059691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.655 [2024-04-24 19:51:42.059722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.655 [2024-04-24 19:51:42.059738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.655 [2024-04-24 19:51:42.072465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cafaa0) 00:21:00.655 [2024-04-24 19:51:42.072497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:00.655 [2024-04-24 19:51:42.072515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:00.655 00:21:00.655 Latency(us) 00:21:00.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.655 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:00.655 nvme0n1 : 2.00 18399.49 71.87 0.00 0.00 6947.59 3155.44 17961.72 
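The nvme0n1 row is internally consistent: at the 4096-byte I/O size, 18399.49 IOPS works out to the reported 71.87 MiB/s. A one-line check (illustrative arithmetic only, not part of the test suite):

  awk 'BEGIN { printf "%.2f MiB/s\n", 18399.49 * 4096 / (1024 * 1024) }'   # prints: 71.87 MiB/s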
00:21:00.655 0
00:21:00.655 19:51:42 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
19:51:42 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
19:51:42 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:00.655 | .driver_specific
00:21:00.655 | .nvme_error
00:21:00.655 | .status_code
00:21:00.655 | .command_transient_transport_error'
00:21:00.655 19:51:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:00.914 19:51:42 -- host/digest.sh@71 -- # (( 144 > 0 ))
00:21:00.914 19:51:42 -- host/digest.sh@73 -- # killprocess 1771710
00:21:00.914 19:51:42 -- common/autotest_common.sh@936 -- # '[' -z 1771710 ']'
00:21:00.914 19:51:42 -- common/autotest_common.sh@940 -- # kill -0 1771710
00:21:00.914 19:51:42 -- common/autotest_common.sh@941 -- # uname
00:21:00.914 19:51:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:00.914 19:51:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1771710
00:21:00.914 19:51:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:21:00.914 19:51:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:21:00.914 19:51:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1771710'
killing process with pid 1771710
19:51:42 -- common/autotest_common.sh@955 -- # kill 1771710
Received shutdown signal, test time was about 2.000000 seconds
00:21:00.914
00:21:00.914 Latency(us)
00:21:00.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:00.914 ===================================================================================================================
00:21:00.914 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
19:51:42 -- common/autotest_common.sh@960 -- # wait 1771710
00:21:01.172 19:51:42 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
19:51:42 -- host/digest.sh@54 -- # local rw bs qd
19:51:42 -- host/digest.sh@56 -- # rw=randread
19:51:42 -- host/digest.sh@56 -- # bs=131072
19:51:42 -- host/digest.sh@56 -- # qd=16
19:51:42 -- host/digest.sh@58 -- # bperfpid=1772127
19:51:42 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
19:51:42 -- host/digest.sh@60 -- # waitforlisten 1772127 /var/tmp/bperf.sock
19:51:42 -- common/autotest_common.sh@817 -- # '[' -z 1772127 ']'
19:51:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
19:51:42 -- common/autotest_common.sh@822 -- # local max_retries=100
19:51:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
19:51:42 -- common/autotest_common.sh@826 -- # xtrace_disable
19:51:42 -- common/autotest_common.sh@10 -- # set +x
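get_transient_errcount is the pass/fail probe for each run: it reads bdevperf's per-bdev NVMe error counters back over the RPC socket and extracts the transient-transport-error tally that the digest failures above produced (144 for this run). The same query the trace shows, condensed into a standalone form using the socket path and bdev name from this run:

  # read the COMMAND TRANSIENT TRANSPORT ERROR count for nvme0n1 (requires
  # bdev_nvme_set_options --nvme-error-stat, which this test sets)
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "digest errors surfaced as transient transport errors: $errcount"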
[2024-04-24 19:51:42.685157] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization...
00:21:01.173 [2024-04-24 19:51:42.685237] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772127 ]
00:21:01.173 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:01.173 Zero copy mechanism will not be used.
00:21:01.173 EAL: No free 2048 kB hugepages reported on node 1
00:21:01.431 [2024-04-24 19:51:42.743703] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:01.431 [2024-04-24 19:51:42.851166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:01.688 19:51:42 -- common/autotest_common.sh@846 -- # (( i == 0 ))
19:51:42 -- common/autotest_common.sh@850 -- # return 0
19:51:42 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
19:51:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:01.946 19:51:43 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
19:51:43 -- common/autotest_common.sh@549 -- # xtrace_disable
19:51:43 -- common/autotest_common.sh@10 -- # set +x
19:51:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
19:51:43 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
19:51:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:02.204 nvme0n1
00:21:02.204 19:51:43 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
19:51:43 -- common/autotest_common.sh@549 -- # xtrace_disable
19:51:43 -- common/autotest_common.sh@10 -- # set +x
19:51:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
19:51:43 -- host/digest.sh@69 -- # bperf_py perform_tests
19:51:43 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:02.204 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:02.204 Zero copy mechanism will not be used.
00:21:02.204 Running I/O for 2 seconds...
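The trace above is the core of the digest-error scenario: error counting and unlimited retries are enabled on the NVMe bdev layer, the controller is attached with data digest (--ddgst) enabled, and the accel error injector is armed to corrupt crc32c results so received data PDUs fail verification. Condensed into a runnable sketch, with flags and addresses taken verbatim from this run (it assumes the same bdevperf instance is already listening on /var/tmp/bperf.sock):

  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error stats, retry forever
  $RPC accel_error_inject_error -o crc32c -t disable                   # start from a clean injector state
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                   # attach with data digest enabled
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt crc32c results (-i 32 as above)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests                             # drive I/O; digest errors follow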
00:21:02.204 [2024-04-24 19:51:43.687272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850)
00:21:02.204 [2024-04-24 19:51:43.687328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:02.204 [2024-04-24 19:51:43.687366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[.. repeated bdevperf output elided: the same three-line pattern recurs roughly every 12-13 ms from 19:51:43.700 through 19:51:44.811, always on tqpair=(0x679850) with qid:1 cid:15 and len:32 (each 131072-byte I/O spans 32 blocks), the lba varying per I/O and sqhd cycling 0021/0041/0061/0001; the excerpt breaks off mid-entry at lba:8768 ..]
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.507 [2024-04-24 19:51:44.811584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:03.507 [2024-04-24 19:51:44.824377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.507 [2024-04-24 19:51:44.824410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.507 [2024-04-24 19:51:44.824428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:03.507 [2024-04-24 19:51:44.837073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.507 [2024-04-24 19:51:44.837111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.507 [2024-04-24 19:51:44.837131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:03.507 [2024-04-24 19:51:44.850230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.507 [2024-04-24 19:51:44.850261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.507 [2024-04-24 19:51:44.850279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:03.507 [2024-04-24 19:51:44.862874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.507 [2024-04-24 19:51:44.862917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.507 [2024-04-24 19:51:44.862934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:03.507 [2024-04-24 19:51:44.875653] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.507 [2024-04-24 19:51:44.875697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.507 [2024-04-24 19:51:44.875713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:03.507 [2024-04-24 19:51:44.888270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.507 [2024-04-24 19:51:44.888302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.507 [2024-04-24 19:51:44.888320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:03.507 [2024-04-24 19:51:44.900909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.507 [2024-04-24 19:51:44.900946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.507 [2024-04-24 19:51:44.900962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:03.507 [2024-04-24 19:51:44.913754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.507 [2024-04-24 19:51:44.913782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.507 [2024-04-24 19:51:44.913798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:03.507 [2024-04-24 19:51:44.926270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.507 [2024-04-24 19:51:44.926300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.508 [2024-04-24 19:51:44.926318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:03.508 [2024-04-24 19:51:44.938729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.508 [2024-04-24 19:51:44.938757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.508 [2024-04-24 19:51:44.938773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:03.508 [2024-04-24 19:51:44.951170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.508 [2024-04-24 19:51:44.951198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.508 [2024-04-24 19:51:44.951214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:03.508 [2024-04-24 19:51:44.963736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.508 [2024-04-24 19:51:44.963774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.508 [2024-04-24 19:51:44.963790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:03.508 [2024-04-24 19:51:44.976317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.508 [2024-04-24 19:51:44.976349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.508 [2024-04-24 19:51:44.976367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:03.508 [2024-04-24 19:51:44.989165] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 
00:21:03.508 [2024-04-24 19:51:44.989197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.508 [2024-04-24 19:51:44.989215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:03.508 [2024-04-24 19:51:45.001868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.508 [2024-04-24 19:51:45.001897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.508 [2024-04-24 19:51:45.001913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:03.508 [2024-04-24 19:51:45.014493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.508 [2024-04-24 19:51:45.014525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.508 [2024-04-24 19:51:45.014543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.027192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.027225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.027243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.040041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.040074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.040092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.053068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.053101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.053125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.065672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.065716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.065732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.078433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.078464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.078482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.091221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.091253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.091271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.104067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.104099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.104118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.116736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.116766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.116782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.129566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.129597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.129614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.142486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.142518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.142537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.155434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.155466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.155483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.168270] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.168308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.168326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.181110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.181142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.181160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.193959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.194004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.194022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.206515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.206547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.206564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.219467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.219513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.219531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.232476] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.232518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.232534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.245347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.245375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.245389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:03.766 [2024-04-24 19:51:45.258324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.258356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.258374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:03.766 [2024-04-24 19:51:45.271345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:03.766 [2024-04-24 19:51:45.271378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:03.766 [2024-04-24 19:51:45.271396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:04.024 [2024-04-24 19:51:45.283904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.024 [2024-04-24 19:51:45.283933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.024 [2024-04-24 19:51:45.283950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.296782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.296810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.296826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.309595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.309652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.309671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.322347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.322375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.322408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.335238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.335270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.335287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.348149] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.348179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.348195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.361011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.361043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.361061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.373912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.373940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.373956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.387018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.387059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.387078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.399869] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.399897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.399913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.412716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.412759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.412775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.425606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.425646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.425666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.438269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.438300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.438318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.450986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.451018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.451035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.464075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.464107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.464125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.477064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.477095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.477113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.489547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.489579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.489597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.502537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.502571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.502589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.515286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.515319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:04.025 [2024-04-24 19:51:45.515337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:04.025 [2024-04-24 19:51:45.528206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.025 [2024-04-24 19:51:45.528239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.025 [2024-04-24 19:51:45.528257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:04.291 [2024-04-24 19:51:45.540962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.291 [2024-04-24 19:51:45.541007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.291 [2024-04-24 19:51:45.541023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:04.291 [2024-04-24 19:51:45.554107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.291 [2024-04-24 19:51:45.554140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.291 [2024-04-24 19:51:45.554158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:04.291 [2024-04-24 19:51:45.567025] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.291 [2024-04-24 19:51:45.567057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.291 [2024-04-24 19:51:45.567075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:04.291 [2024-04-24 19:51:45.579602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.291 [2024-04-24 19:51:45.579642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.291 [2024-04-24 19:51:45.579662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:04.291 [2024-04-24 19:51:45.592432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.291 [2024-04-24 19:51:45.592464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.291 [2024-04-24 19:51:45.592482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:04.291 [2024-04-24 19:51:45.605375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.291 [2024-04-24 19:51:45.605407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.291 [2024-04-24 19:51:45.605431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:04.291 [2024-04-24 19:51:45.618225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.291 [2024-04-24 19:51:45.618256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.291 [2024-04-24 19:51:45.618274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:04.291 [2024-04-24 19:51:45.631060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.291 [2024-04-24 19:51:45.631093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.291 [2024-04-24 19:51:45.631111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:04.291 [2024-04-24 19:51:45.644034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.292 [2024-04-24 19:51:45.644067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.292 [2024-04-24 19:51:45.644085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:04.292 [2024-04-24 19:51:45.657383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.292 [2024-04-24 19:51:45.657415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.292 [2024-04-24 19:51:45.657434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:04.292 [2024-04-24 19:51:45.670117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.292 [2024-04-24 19:51:45.670150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.292 [2024-04-24 19:51:45.670168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:04.292 [2024-04-24 19:51:45.682625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x679850) 00:21:04.292 [2024-04-24 19:51:45.682660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:04.292 [2024-04-24 19:51:45.682677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:04.292 00:21:04.292 Latency(us) 00:21:04.292 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.292 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:04.292 nvme0n1 : 
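A quick consistency check on that summary row: at the 131072-byte (128 KiB) I/O size, MiB/s should be IOPS divided by 8, which is exactly what the table shows. A one-liner to confirm (awk for the float math, since bash arithmetic is integer-only):

# 2441.19 IOPS x 131072 B per I/O, converted to MiB/s (1 MiB = 1048576 B):
awk 'BEGIN { printf "%.2f MiB/s\n", 2441.19 * 131072 / 1048576 }'   # -> 305.15 MiB/s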
00:21:04.292 0
00:21:04.292 19:51:45 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:04.292 19:51:45 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:04.292 19:51:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:04.292 19:51:45 -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:21:04.557 19:51:45 -- host/digest.sh@71 -- # (( 158 > 0 ))
00:21:04.557 19:51:45 -- host/digest.sh@73 -- # killprocess 1772127
00:21:04.557 19:51:45 -- common/autotest_common.sh@936 -- # '[' -z 1772127 ']'
00:21:04.557 19:51:45 -- common/autotest_common.sh@940 -- # kill -0 1772127
00:21:04.557 19:51:45 -- common/autotest_common.sh@941 -- # uname
00:21:04.557 19:51:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:04.557 19:51:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1772127
00:21:04.557 19:51:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:21:04.557 19:51:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:21:04.557 19:51:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1772127'
killing process with pid 1772127
00:21:04.557 19:51:46 -- common/autotest_common.sh@955 -- # kill 1772127
Received shutdown signal, test time was about 2.000000 seconds
00:21:04.557 Latency(us)
00:21:04.557 Device Information          : runtime(s)     IOPS    MiB/s  Fail/s   TO/s  Average      min       max
00:21:04.557 ===================================================================================================================
00:21:04.557 Total                       :       0.00     0.00     0.00    0.00   0.00     0.00     0.00      0.00
00:21:04.557 19:51:46 -- common/autotest_common.sh@960 -- # wait 1772127
00:21:04.815 19:51:46 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:21:04.815 19:51:46 -- host/digest.sh@54 -- # local rw bs qd
00:21:04.815 19:51:46 -- host/digest.sh@56 -- # rw=randwrite
00:21:04.815 19:51:46 -- host/digest.sh@56 -- # bs=4096
00:21:04.815 19:51:46 -- host/digest.sh@56 -- # qd=128
00:21:04.815 19:51:46 -- host/digest.sh@58 -- # bperfpid=1772538
00:21:04.815 19:51:46 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:21:04.815 19:51:46 -- host/digest.sh@60 -- # waitforlisten 1772538 /var/tmp/bperf.sock
00:21:04.815 19:51:46 -- common/autotest_common.sh@817 -- # '[' -z 1772538 ']'
00:21:04.815 19:51:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:04.815 19:51:46 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:04.815 19:51:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
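The pass/fail gate for the randread case that just ended is that (( 158 > 0 )) check: get_transient_errcount reads the per-bdev NVMe error counters accumulated under bdev_nvme_set_options --nvme-error-stat, via bdev_get_iostat plus a jq filter. A standalone sketch of the same query, using the socket path and bdev name from this run:

#!/usr/bin/env bash
# Count transient transport errors recorded for a bdev (as host/digest.sh does).
# Assumes bdevperf is listening on /var/tmp/bperf.sock and that
# bdev_nvme_set_options --nvme-error-stat was applied before I/O ran.
errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# A digest-corruption run only passes if at least one such error was counted.
(( errcount > 0 )) && echo "PASS ($errcount transient transport errors)" || echo "FAIL"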
00:21:04.815 19:51:46 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:04.815 19:51:46 -- common/autotest_common.sh@10 -- # set +x
00:21:05.074 [2024-04-24 19:51:46.330157] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization...
00:21:05.074 [2024-04-24 19:51:46.330238] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772538 ]
00:21:05.074 EAL: No free 2048 kB hugepages reported on node 1
00:21:05.074 [2024-04-24 19:51:46.387971] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:05.074 [2024-04-24 19:51:46.495345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:05.331 19:51:46 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:05.331 19:51:46 -- common/autotest_common.sh@850 -- # return 0
00:21:05.331 19:51:46 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:05.331 19:51:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:05.588 19:51:46 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:05.588 19:51:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:05.588 19:51:46 -- common/autotest_common.sh@10 -- # set +x
00:21:05.588 19:51:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:05.588 19:51:46 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:05.588 19:51:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:05.846 nvme0n1
00:21:05.846 19:51:47 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:21:05.846 19:51:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:05.846 19:51:47 -- common/autotest_common.sh@10 -- # set +x
00:21:05.846 19:51:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:05.846 19:51:47 -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:05.846 19:51:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:05.846 Running I/O for 2 seconds...
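Stripped of the xtrace noise, the setup for this randwrite case is a short RPC sequence: enable per-status-code NVMe error counters with unlimited host-side retries, keep crc32c intact while the controller attaches with data digest (--ddgst) enabled, then arm the accel error injector so digests start failing once I/O flows. A condensed sketch using the same socket, address, and NQN as this run; -i 256 is read here as the injection interval, i.e. roughly one corrupted crc32c operation in 256:

#!/usr/bin/env bash
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Track NVMe errors per status code; retry failed I/O indefinitely on the host side.
$rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# No corruption while the controller connects and the namespace is enumerated.
$rpc accel_error_inject_error -o crc32c -t disable

# Attach over TCP with data digest (DDGST) enabled on the queue pairs.
$rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Now corrupt crc32c results at interval 256 so data digests mismatch on the wire.
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256

# Drive the configured 2-second randwrite workload.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests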
00:21:05.846 [2024-04-24 19:51:47.351612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f6cc8
00:21:05.846 [2024-04-24 19:51:47.352597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:05.846 [2024-04-24 19:51:47.352647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0
[... the same three-line pattern (Data digest error on tqpair=(0x15174c0), now raised from tcp.c:2047:data_crc32_calc_done against WRITE commands, followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats about every 13 ms with varying cid, lba, and pdu values, from 19:51:47.365083 through 19:51:47.704214, where this section of the log breaks off mid-entry ...]
TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.365 [2024-04-24 19:51:47.716102] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e9168 00:21:06.365 [2024-04-24 19:51:47.717108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.365 [2024-04-24 19:51:47.717140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.365 [2024-04-24 19:51:47.729061] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e38d0 00:21:06.365 [2024-04-24 19:51:47.730066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.365 [2024-04-24 19:51:47.730107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.365 [2024-04-24 19:51:47.741999] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f5be8 00:21:06.365 [2024-04-24 19:51:47.743055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.365 [2024-04-24 19:51:47.743086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.365 [2024-04-24 19:51:47.755029] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f8e88 00:21:06.365 [2024-04-24 19:51:47.756061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.365 [2024-04-24 19:51:47.756093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.365 [2024-04-24 19:51:47.768102] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ee190 00:21:06.365 [2024-04-24 19:51:47.769111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.365 [2024-04-24 19:51:47.769142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.365 [2024-04-24 19:51:47.781026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190eaef0 00:21:06.365 [2024-04-24 19:51:47.782057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.365 [2024-04-24 19:51:47.782094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.365 [2024-04-24 19:51:47.794176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e1b48 00:21:06.365 [2024-04-24 19:51:47.795189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.365 [2024-04-24 19:51:47.795221] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.365 [2024-04-24 19:51:47.807353] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f46d0 00:21:06.365 [2024-04-24 19:51:47.808363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.365 [2024-04-24 19:51:47.808395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.365 [2024-04-24 19:51:47.820313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f7970 00:21:06.365 [2024-04-24 19:51:47.821343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.365 [2024-04-24 19:51:47.821375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.365 [2024-04-24 19:51:47.833129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190fac10 00:21:06.365 [2024-04-24 19:51:47.834158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.365 [2024-04-24 19:51:47.834189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.365 [2024-04-24 19:51:47.846182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ec408 00:21:06.365 [2024-04-24 19:51:47.847263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.365 [2024-04-24 19:51:47.847291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.365 [2024-04-24 19:51:47.858388] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e9168 00:21:06.365 [2024-04-24 19:51:47.859447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.365 [2024-04-24 19:51:47.859474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.365 [2024-04-24 19:51:47.870517] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e38d0 00:21:06.365 [2024-04-24 19:51:47.871578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.365 [2024-04-24 19:51:47.871606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:47.882748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f5be8 00:21:06.624 [2024-04-24 19:51:47.883805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:47.883833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:47.894786] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f8e88 00:21:06.624 [2024-04-24 19:51:47.895762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:47.895790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:47.906724] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ee190 00:21:06.624 [2024-04-24 19:51:47.907801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:47.907829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:47.918698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190eaef0 00:21:06.624 [2024-04-24 19:51:47.919658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:47.919686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:47.930733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e1b48 00:21:06.624 [2024-04-24 19:51:47.931673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:47.931702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:47.942699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f46d0 00:21:06.624 [2024-04-24 19:51:47.943701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:47.943730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:47.954772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f7970 00:21:06.624 [2024-04-24 19:51:47.955783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:47.955811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:47.966744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190fac10 00:21:06.624 [2024-04-24 19:51:47.967732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 
19:51:47.967760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:47.978790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ec408 00:21:06.624 [2024-04-24 19:51:47.979732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:47.979760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:47.990795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e9168 00:21:06.624 [2024-04-24 19:51:47.991809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:47.991838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:48.002844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e38d0 00:21:06.624 [2024-04-24 19:51:48.003824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:48.003852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:48.014873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f5be8 00:21:06.624 [2024-04-24 19:51:48.015867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:48.015895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:48.026844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f8e88 00:21:06.624 [2024-04-24 19:51:48.027866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:48.027894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:48.038800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ee190 00:21:06.624 [2024-04-24 19:51:48.039807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:48.039834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:48.050790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190eaef0 00:21:06.624 [2024-04-24 19:51:48.051801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:06.624 [2024-04-24 19:51:48.051829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:48.062817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e1b48 00:21:06.624 [2024-04-24 19:51:48.063802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:48.063830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:48.074807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f46d0 00:21:06.624 [2024-04-24 19:51:48.075807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:48.075835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:48.086813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f7970 00:21:06.624 [2024-04-24 19:51:48.087801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.624 [2024-04-24 19:51:48.087829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.624 [2024-04-24 19:51:48.098875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190fac10 00:21:06.624 [2024-04-24 19:51:48.099833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.625 [2024-04-24 19:51:48.099867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.625 [2024-04-24 19:51:48.110839] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ec408 00:21:06.625 [2024-04-24 19:51:48.111804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.625 [2024-04-24 19:51:48.111832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.625 [2024-04-24 19:51:48.122791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e9168 00:21:06.625 [2024-04-24 19:51:48.123781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.625 [2024-04-24 19:51:48.123807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.625 [2024-04-24 19:51:48.134783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e38d0 00:21:06.625 [2024-04-24 19:51:48.135779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25557 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:21:06.625 [2024-04-24 19:51:48.135810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.146871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f5be8 00:21:06.884 [2024-04-24 19:51:48.147820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.147847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.158773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f8e88 00:21:06.884 [2024-04-24 19:51:48.159787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.159815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.170829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ee190 00:21:06.884 [2024-04-24 19:51:48.171817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.171844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.182826] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190eaef0 00:21:06.884 [2024-04-24 19:51:48.183835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.183863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.194790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e1b48 00:21:06.884 [2024-04-24 19:51:48.195735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.195763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.206767] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f46d0 00:21:06.884 [2024-04-24 19:51:48.207834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.207863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.218796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f7970 00:21:06.884 [2024-04-24 19:51:48.219740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5819 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.219768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.230820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190fac10 00:21:06.884 [2024-04-24 19:51:48.231830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.231858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.242945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ec408 00:21:06.884 [2024-04-24 19:51:48.243899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.243926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.254928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e9168 00:21:06.884 [2024-04-24 19:51:48.255897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.255926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.266882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e38d0 00:21:06.884 [2024-04-24 19:51:48.267878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.267906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.278846] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f5be8 00:21:06.884 [2024-04-24 19:51:48.279798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.279826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.290850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f8e88 00:21:06.884 [2024-04-24 19:51:48.291898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.291926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.302905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ee190 00:21:06.884 [2024-04-24 19:51:48.303884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21083 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.303913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.314944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190eaef0 00:21:06.884 [2024-04-24 19:51:48.315936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.315964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.326963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e1b48 00:21:06.884 [2024-04-24 19:51:48.327992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.328019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.339032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f46d0 00:21:06.884 [2024-04-24 19:51:48.340157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.340184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.351117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f7970 00:21:06.884 [2024-04-24 19:51:48.352175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.352203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.363134] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190fac10 00:21:06.884 [2024-04-24 19:51:48.364107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.364134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.375019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ec408 00:21:06.884 [2024-04-24 19:51:48.376001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.376029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:06.884 [2024-04-24 19:51:48.387059] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e9168 00:21:06.884 [2024-04-24 19:51:48.388039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 
nsid:1 lba:1131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.884 [2024-04-24 19:51:48.388066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.399128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e38d0 00:21:07.144 [2024-04-24 19:51:48.400193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.400222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.411195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f5be8 00:21:07.144 [2024-04-24 19:51:48.412149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.412183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.423118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f8e88 00:21:07.144 [2024-04-24 19:51:48.424118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.424146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.435075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ee190 00:21:07.144 [2024-04-24 19:51:48.436061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.436088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.447035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190eaef0 00:21:07.144 [2024-04-24 19:51:48.448040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.448067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.459029] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e1b48 00:21:07.144 [2024-04-24 19:51:48.460043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.460070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.471014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f46d0 00:21:07.144 [2024-04-24 19:51:48.471962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:61 nsid:1 lba:21504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.471990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.482984] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f7970 00:21:07.144 [2024-04-24 19:51:48.483934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.483962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.494991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190fac10 00:21:07.144 [2024-04-24 19:51:48.496021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.496048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.507050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ec408 00:21:07.144 [2024-04-24 19:51:48.507985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.508014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.519065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e9168 00:21:07.144 [2024-04-24 19:51:48.520095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.520122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.531043] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e38d0 00:21:07.144 [2024-04-24 19:51:48.532053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.532080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.543050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f5be8 00:21:07.144 [2024-04-24 19:51:48.544062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.544089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.555025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f8e88 00:21:07.144 [2024-04-24 19:51:48.555987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.556015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.566950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ee190 00:21:07.144 [2024-04-24 19:51:48.567945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.567972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.578868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190eaef0 00:21:07.144 [2024-04-24 19:51:48.579882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.579909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.590779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e1b48 00:21:07.144 [2024-04-24 19:51:48.591782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.591809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.602795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f46d0 00:21:07.144 [2024-04-24 19:51:48.603820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.603848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.614842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f7970 00:21:07.144 [2024-04-24 19:51:48.615818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.615846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.626791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190fac10 00:21:07.144 [2024-04-24 19:51:48.627799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.627828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.638817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ec408 00:21:07.144 [2024-04-24 
19:51:48.639831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.144 [2024-04-24 19:51:48.639861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.144 [2024-04-24 19:51:48.650777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e9168 00:21:07.145 [2024-04-24 19:51:48.651787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.145 [2024-04-24 19:51:48.651814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.403 [2024-04-24 19:51:48.662863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e38d0 00:21:07.403 [2024-04-24 19:51:48.663899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.403 [2024-04-24 19:51:48.663927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.403 [2024-04-24 19:51:48.674860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f5be8 00:21:07.403 [2024-04-24 19:51:48.675872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.403 [2024-04-24 19:51:48.675900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.403 [2024-04-24 19:51:48.686817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f8e88 00:21:07.403 [2024-04-24 19:51:48.687772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.403 [2024-04-24 19:51:48.687800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.403 [2024-04-24 19:51:48.698824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ee190 00:21:07.403 [2024-04-24 19:51:48.699813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.403 [2024-04-24 19:51:48.699840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.403 [2024-04-24 19:51:48.710824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190eaef0 00:21:07.404 [2024-04-24 19:51:48.711832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.711860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.722800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e1b48 
00:21:07.404 [2024-04-24 19:51:48.723809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.723842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.734777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f46d0 00:21:07.404 [2024-04-24 19:51:48.735791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.735819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.746893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f7970 00:21:07.404 [2024-04-24 19:51:48.747931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.747961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.759498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190fac10 00:21:07.404 [2024-04-24 19:51:48.760543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.760575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.772488] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ec408 00:21:07.404 [2024-04-24 19:51:48.773575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.773606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.785573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e9168 00:21:07.404 [2024-04-24 19:51:48.786626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.786666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.798592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e38d0 00:21:07.404 [2024-04-24 19:51:48.799651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.799683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.811776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with 
pdu=0x2000190f5be8 00:21:07.404 [2024-04-24 19:51:48.812893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.812921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.824887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f8e88 00:21:07.404 [2024-04-24 19:51:48.825926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.825958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.837920] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ee190 00:21:07.404 [2024-04-24 19:51:48.838989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.839021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.850938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190eaef0 00:21:07.404 [2024-04-24 19:51:48.852050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.852082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.863921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e1b48 00:21:07.404 [2024-04-24 19:51:48.864961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.864992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.876978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f46d0 00:21:07.404 [2024-04-24 19:51:48.878024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.878052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.889854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f7970 00:21:07.404 [2024-04-24 19:51:48.890876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.890903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.902753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15174c0) with pdu=0x2000190fac10 00:21:07.404 [2024-04-24 19:51:48.903775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.903804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.404 [2024-04-24 19:51:48.915649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ec408 00:21:07.404 [2024-04-24 19:51:48.916725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.404 [2024-04-24 19:51:48.916767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:48.928536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e9168 00:21:07.663 [2024-04-24 19:51:48.929621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:48.929658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:48.941591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e38d0 00:21:07.663 [2024-04-24 19:51:48.942642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:48.942673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:48.954555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f5be8 00:21:07.663 [2024-04-24 19:51:48.955626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:48.955664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:48.967551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f8e88 00:21:07.663 [2024-04-24 19:51:48.968600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:48.968640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:48.980537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ee190 00:21:07.663 [2024-04-24 19:51:48.981586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:48.981618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:48.993504] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x15174c0) with pdu=0x2000190eaef0 00:21:07.663 [2024-04-24 19:51:48.994555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:48.994586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:49.006434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e1b48 00:21:07.663 [2024-04-24 19:51:49.007461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:49.007489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:49.019429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f46d0 00:21:07.663 [2024-04-24 19:51:49.020479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:49.020510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:49.032483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f7970 00:21:07.663 [2024-04-24 19:51:49.033516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:49.033547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:49.045482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190fac10 00:21:07.663 [2024-04-24 19:51:49.046529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:49.046560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:49.058467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ec408 00:21:07.663 [2024-04-24 19:51:49.059496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:49.059533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:49.071387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e9168 00:21:07.663 [2024-04-24 19:51:49.072441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:49.072472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:49.084354] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e38d0 00:21:07.663 [2024-04-24 19:51:49.085351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:49.085382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:49.097221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f5be8 00:21:07.663 [2024-04-24 19:51:49.098248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:49.098279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:49.110196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f8e88 00:21:07.663 [2024-04-24 19:51:49.111247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:49.111278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:49.123187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ee190 00:21:07.663 [2024-04-24 19:51:49.124221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.663 [2024-04-24 19:51:49.124251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.663 [2024-04-24 19:51:49.136105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190eaef0 00:21:07.663 [2024-04-24 19:51:49.137130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.664 [2024-04-24 19:51:49.137161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.664 [2024-04-24 19:51:49.148813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e1b48 00:21:07.664 [2024-04-24 19:51:49.149869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.664 [2024-04-24 19:51:49.149897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.664 [2024-04-24 19:51:49.161788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f46d0 00:21:07.664 [2024-04-24 19:51:49.162872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.664 [2024-04-24 19:51:49.162899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.664 
[2024-04-24 19:51:49.174701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f7970 00:21:07.664 [2024-04-24 19:51:49.175743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.664 [2024-04-24 19:51:49.175777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.923 [2024-04-24 19:51:49.187642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190fac10 00:21:07.923 [2024-04-24 19:51:49.188708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.923 [2024-04-24 19:51:49.188735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.923 [2024-04-24 19:51:49.200649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ec408 00:21:07.923 [2024-04-24 19:51:49.201704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.923 [2024-04-24 19:51:49.201733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.923 [2024-04-24 19:51:49.213624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e9168 00:21:07.923 [2024-04-24 19:51:49.214709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.923 [2024-04-24 19:51:49.214735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.923 [2024-04-24 19:51:49.226656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e38d0 00:21:07.923 [2024-04-24 19:51:49.227743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.923 [2024-04-24 19:51:49.227771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.923 [2024-04-24 19:51:49.239701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f5be8 00:21:07.923 [2024-04-24 19:51:49.240765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.923 [2024-04-24 19:51:49.240793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.923 [2024-04-24 19:51:49.252757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f8e88 00:21:07.923 [2024-04-24 19:51:49.253812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.923 [2024-04-24 19:51:49.253856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004e p:0 
m:0 dnr:0 00:21:07.923 [2024-04-24 19:51:49.265729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ee190 00:21:07.923 [2024-04-24 19:51:49.266761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.923 [2024-04-24 19:51:49.266788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.923 [2024-04-24 19:51:49.278748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190eaef0 00:21:07.923 [2024-04-24 19:51:49.279744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.923 [2024-04-24 19:51:49.279772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.923 [2024-04-24 19:51:49.291702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190e1b48 00:21:07.923 [2024-04-24 19:51:49.292761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.923 [2024-04-24 19:51:49.292788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.923 [2024-04-24 19:51:49.304645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f46d0 00:21:07.923 [2024-04-24 19:51:49.305731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.923 [2024-04-24 19:51:49.305758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.923 [2024-04-24 19:51:49.317712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190f7970 00:21:07.923 [2024-04-24 19:51:49.318777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.923 [2024-04-24 19:51:49.318805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.923 [2024-04-24 19:51:49.330696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190fac10 00:21:07.923 [2024-04-24 19:51:49.331742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.923 [2024-04-24 19:51:49.331769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:07.923 [2024-04-24 19:51:49.343653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15174c0) with pdu=0x2000190ec408 00:21:07.923 [2024-04-24 19:51:49.344738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:07.923 [2024-04-24 19:51:49.344767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 
cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:21:07.923
00:21:07.923 Latency(us)
00:21:07.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:07.923 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:07.923 nvme0n1 : 2.01 20372.50 79.58 0.00 0.00 6272.37 2924.85 12621.75
00:21:07.923 ===================================================================================================================
00:21:07.923 Total : 20372.50 79.58 0.00 0.00 6272.37 2924.85 12621.75
00:21:07.923 0
00:21:07.923 19:51:49 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:07.923 19:51:49 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:07.923 19:51:49 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:07.923 | .driver_specific
00:21:07.923 | .nvme_error
00:21:07.923 | .status_code
00:21:07.923 | .command_transient_transport_error'
00:21:07.923 19:51:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:08.181 19:51:49 -- host/digest.sh@71 -- # (( 160 > 0 ))
00:21:08.181 19:51:49 -- host/digest.sh@73 -- # killprocess 1772538
00:21:08.181 19:51:49 -- common/autotest_common.sh@936 -- # '[' -z 1772538 ']'
00:21:08.181 19:51:49 -- common/autotest_common.sh@940 -- # kill -0 1772538
00:21:08.181 19:51:49 -- common/autotest_common.sh@941 -- # uname
00:21:08.181 19:51:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:08.181 19:51:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1772538
00:21:08.181 19:51:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:21:08.181 19:51:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:21:08.181 19:51:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1772538'
00:21:08.181 killing process with pid 1772538
00:21:08.181 19:51:49 -- common/autotest_common.sh@955 -- # kill 1772538
00:21:08.181 Received shutdown signal, test time was about 2.000000 seconds
00:21:08.181
00:21:08.181 Latency(us)
00:21:08.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:08.181 ===================================================================================================================
00:21:08.181 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:08.181 19:51:49 -- common/autotest_common.sh@960 -- # wait 1772538
00:21:08.440 19:51:49 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:21:08.440 19:51:49 -- host/digest.sh@54 -- # local rw bs qd
00:21:08.440 19:51:49 -- host/digest.sh@56 -- # rw=randwrite
00:21:08.440 19:51:49 -- host/digest.sh@56 -- # bs=131072
00:21:08.440 19:51:49 -- host/digest.sh@56 -- # qd=16
00:21:08.440 19:51:49 -- host/digest.sh@58 -- # bperfpid=1772951
00:21:08.440 19:51:49 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:21:08.440 19:51:49 -- host/digest.sh@60 -- # waitforlisten 1772951 /var/tmp/bperf.sock
00:21:08.440 19:51:49 -- common/autotest_common.sh@817 -- # '[' -z 1772951 ']'
00:21:08.440 19:51:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:08.440 19:51:49 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:08.440 19:51:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:08.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:08.440 19:51:49 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:08.440 19:51:49 -- common/autotest_common.sh@10 -- # set +x
00:21:08.440 [2024-04-24 19:51:49.951695] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization...
00:21:08.440 [2024-04-24 19:51:49.951779] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772951 ]
00:21:08.440 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:08.440 Zero copy mechanism will not be used.
00:21:08.699 EAL: No free 2048 kB hugepages reported on node 1
00:21:08.699 [2024-04-24 19:51:50.015753] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:08.699 [2024-04-24 19:51:50.125963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:08.956 19:51:50 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:08.957 19:51:50 -- common/autotest_common.sh@850 -- # return 0
00:21:08.957 19:51:50 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:08.957 19:51:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:09.215 19:51:50 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:09.215 19:51:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:09.215 19:51:50 -- common/autotest_common.sh@10 -- # set +x
00:21:09.215 19:51:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:09.215 19:51:50 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:09.215 19:51:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:09.473 nvme0n1
00:21:09.473 19:51:50 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:21:09.473 19:51:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:09.473 19:51:50 -- common/autotest_common.sh@10 -- # set +x
00:21:09.473 19:51:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:09.473 19:51:50 -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:09.473 19:51:50 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:09.733 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:09.733 Zero copy mechanism will not be used.
00:21:09.733 Running I/O for 2 seconds...
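
The xtrace above is the interesting part of this run, so a condensed sketch may help. The following is reconstructed from the trace, not copied from host/digest.sh: the bperf_rpc and target_rpc helpers are stand-ins for the test framework's wrappers, and it assumes rpc_cmd talks to the nvmf target app on its default RPC socket. The ordering is the point: CRC32C error injection stays disabled while the controller connects, the controller is attached with data digest enabled (--ddgst), and only then is the accel layer told to corrupt every 32nd CRC32C it computes, which is what produces the transient transport errors in the records that follow.

    # Minimal sketch, assuming this workspace layout and the default target RPC socket.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_rpc()  { "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf app
    target_rpc() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }  # nvmf target (assumed /var/tmp/spdk.sock)

    # bdevperf pinned by core mask (-m 2): 128 KiB random writes, queue depth 16, 2 s, wait for RPC (-z)
    "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done  # crude stand-in for waitforlisten

    # keep per-status-code NVMe error counters and retry failed I/O indefinitely, so each
    # digest error is counted and retried instead of failing up the bdev stack
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # injection off while the controller connects, then attach with data digest enabled
    target_rpc accel_error_inject_error -o crc32c -t disable
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # corrupt every 32nd CRC32C the accel layer computes, then drive the timed workload
    target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

With queue depth 16 and an injection interval of 32, a steady trickle of corrupted digests is expected across the 2-second window, which matches the roughly 20 ms spacing of the error records below.
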
00:21:09.733 [2024-04-24 19:51:51.042585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.733 [2024-04-24 19:51:51.043141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.733 [2024-04-24 19:51:51.043194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:09.733 [2024-04-24 19:51:51.062791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.733 [2024-04-24 19:51:51.063426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.733 [2024-04-24 19:51:51.063460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:09.733 [2024-04-24 19:51:51.083579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.733 [2024-04-24 19:51:51.084264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.733 [2024-04-24 19:51:51.084297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:09.733 [2024-04-24 19:51:51.103191] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.733 [2024-04-24 19:51:51.103798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.733 [2024-04-24 19:51:51.103825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:09.733 [2024-04-24 19:51:51.124135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.733 [2024-04-24 19:51:51.124841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.733 [2024-04-24 19:51:51.124884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:09.733 [2024-04-24 19:51:51.144398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.733 [2024-04-24 19:51:51.144800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.733 [2024-04-24 19:51:51.144828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:09.733 [2024-04-24 19:51:51.163725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.733 [2024-04-24 19:51:51.164208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.733 [2024-04-24 19:51:51.164253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:09.733 [2024-04-24 19:51:51.182265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.733 [2024-04-24 19:51:51.182870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.733 [2024-04-24 19:51:51.182898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:09.733 [2024-04-24 19:51:51.202153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.733 [2024-04-24 19:51:51.202710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.733 [2024-04-24 19:51:51.202753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:09.733 [2024-04-24 19:51:51.221688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.733 [2024-04-24 19:51:51.222084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.733 [2024-04-24 19:51:51.222125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:09.733 [2024-04-24 19:51:51.242426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.733 [2024-04-24 19:51:51.242867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.733 [2024-04-24 19:51:51.242893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:09.992 [2024-04-24 19:51:51.262203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.992 [2024-04-24 19:51:51.262814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.992 [2024-04-24 19:51:51.262862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:09.992 [2024-04-24 19:51:51.279386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.992 [2024-04-24 19:51:51.279814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.992 [2024-04-24 19:51:51.279841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:09.992 [2024-04-24 19:51:51.299622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.992 [2024-04-24 19:51:51.300028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.992 [2024-04-24 19:51:51.300071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:09.992 [2024-04-24 19:51:51.319676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.992 [2024-04-24 19:51:51.319996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.992 [2024-04-24 19:51:51.320024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:09.992 [2024-04-24 19:51:51.340261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.992 [2024-04-24 19:51:51.340678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.992 [2024-04-24 19:51:51.340706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:09.992 [2024-04-24 19:51:51.361096] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.992 [2024-04-24 19:51:51.361641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.992 [2024-04-24 19:51:51.361667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:09.992 [2024-04-24 19:51:51.380430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.992 [2024-04-24 19:51:51.380939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.992 [2024-04-24 19:51:51.380967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:09.992 [2024-04-24 19:51:51.400748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.993 [2024-04-24 19:51:51.401154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.993 [2024-04-24 19:51:51.401180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:09.993 [2024-04-24 19:51:51.421421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.993 [2024-04-24 19:51:51.421913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.993 [2024-04-24 19:51:51.421941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:09.993 [2024-04-24 19:51:51.441762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.993 [2024-04-24 19:51:51.442198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.993 [2024-04-24 19:51:51.442243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:09.993 [2024-04-24 19:51:51.461462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.993 [2024-04-24 19:51:51.462027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.993 [2024-04-24 19:51:51.462071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:09.993 [2024-04-24 19:51:51.481429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.993 [2024-04-24 19:51:51.481945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.993 [2024-04-24 19:51:51.481971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:09.993 [2024-04-24 19:51:51.502374] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:09.993 [2024-04-24 19:51:51.502859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.993 [2024-04-24 19:51:51.502903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.251 [2024-04-24 19:51:51.522184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.251 [2024-04-24 19:51:51.522578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.251 [2024-04-24 19:51:51.522606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.251 [2024-04-24 19:51:51.542101] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.251 [2024-04-24 19:51:51.542694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.251 [2024-04-24 19:51:51.542741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.251 [2024-04-24 19:51:51.564519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.251 [2024-04-24 19:51:51.565174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.251 [2024-04-24 19:51:51.565218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.251 [2024-04-24 19:51:51.582557] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.251 [2024-04-24 19:51:51.583001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.251 
[2024-04-24 19:51:51.583042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.251 [2024-04-24 19:51:51.601272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.251 [2024-04-24 19:51:51.601683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.251 [2024-04-24 19:51:51.601711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.251 [2024-04-24 19:51:51.621534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.251 [2024-04-24 19:51:51.621951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.251 [2024-04-24 19:51:51.621978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.251 [2024-04-24 19:51:51.642246] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.251 [2024-04-24 19:51:51.642796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.251 [2024-04-24 19:51:51.642824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.252 [2024-04-24 19:51:51.662051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.252 [2024-04-24 19:51:51.662430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.252 [2024-04-24 19:51:51.662472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.252 [2024-04-24 19:51:51.682623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.252 [2024-04-24 19:51:51.683119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.252 [2024-04-24 19:51:51.683145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.252 [2024-04-24 19:51:51.703139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.252 [2024-04-24 19:51:51.703543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.252 [2024-04-24 19:51:51.703585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.252 [2024-04-24 19:51:51.722693] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.252 [2024-04-24 19:51:51.723095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.252 [2024-04-24 19:51:51.723121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.252 [2024-04-24 19:51:51.740772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.252 [2024-04-24 19:51:51.741255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.252 [2024-04-24 19:51:51.741287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.252 [2024-04-24 19:51:51.760673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.252 [2024-04-24 19:51:51.761316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.252 [2024-04-24 19:51:51.761361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.510 [2024-04-24 19:51:51.782279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.510 [2024-04-24 19:51:51.782779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.510 [2024-04-24 19:51:51.782822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.510 [2024-04-24 19:51:51.803644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.510 [2024-04-24 19:51:51.804039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.510 [2024-04-24 19:51:51.804066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.510 [2024-04-24 19:51:51.824050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.510 [2024-04-24 19:51:51.824691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.510 [2024-04-24 19:51:51.824733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.510 [2024-04-24 19:51:51.847557] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.510 [2024-04-24 19:51:51.848071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.510 [2024-04-24 19:51:51.848114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.510 [2024-04-24 19:51:51.866448] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.510 [2024-04-24 19:51:51.866930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.510 [2024-04-24 19:51:51.866975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.510 [2024-04-24 19:51:51.886039] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.510 [2024-04-24 19:51:51.886524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.510 [2024-04-24 19:51:51.886549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.510 [2024-04-24 19:51:51.907498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.510 [2024-04-24 19:51:51.908014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.510 [2024-04-24 19:51:51.908040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.510 [2024-04-24 19:51:51.925178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.510 [2024-04-24 19:51:51.925556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.510 [2024-04-24 19:51:51.925599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.510 [2024-04-24 19:51:51.943710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.510 [2024-04-24 19:51:51.944089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.510 [2024-04-24 19:51:51.944132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.510 [2024-04-24 19:51:51.961165] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.510 [2024-04-24 19:51:51.961539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.510 [2024-04-24 19:51:51.961584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.510 [2024-04-24 19:51:51.982309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.510 [2024-04-24 19:51:51.982830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.510 [2024-04-24 19:51:51.982872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.510 [2024-04-24 19:51:52.002963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.510 [2024-04-24 19:51:52.003404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.510 [2024-04-24 19:51:52.003445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.510 [2024-04-24 19:51:52.023487] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.510 [2024-04-24 19:51:52.024003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.511 [2024-04-24 19:51:52.024032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.769 [2024-04-24 19:51:52.043894] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.769 [2024-04-24 19:51:52.044425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.769 [2024-04-24 19:51:52.044466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.769 [2024-04-24 19:51:52.065044] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.769 [2024-04-24 19:51:52.065553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.769 [2024-04-24 19:51:52.065600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.769 [2024-04-24 19:51:52.086700] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.769 [2024-04-24 19:51:52.087164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.769 [2024-04-24 19:51:52.087205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.769 [2024-04-24 19:51:52.105400] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.769 [2024-04-24 19:51:52.105852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.769 [2024-04-24 19:51:52.105893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.769 [2024-04-24 19:51:52.125453] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.769 [2024-04-24 19:51:52.125895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.769 [2024-04-24 19:51:52.125921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.769 [2024-04-24 19:51:52.144294] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.769 
[2024-04-24 19:51:52.144847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.769 [2024-04-24 19:51:52.144889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.769 [2024-04-24 19:51:52.163878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.769 [2024-04-24 19:51:52.164412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.769 [2024-04-24 19:51:52.164438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.769 [2024-04-24 19:51:52.187068] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.769 [2024-04-24 19:51:52.187574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.769 [2024-04-24 19:51:52.187600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:10.769 [2024-04-24 19:51:52.209533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.769 [2024-04-24 19:51:52.210036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.769 [2024-04-24 19:51:52.210081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:10.769 [2024-04-24 19:51:52.228992] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.769 [2024-04-24 19:51:52.229538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.769 [2024-04-24 19:51:52.229589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:10.769 [2024-04-24 19:51:52.248702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.769 [2024-04-24 19:51:52.249189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.770 [2024-04-24 19:51:52.249234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:10.770 [2024-04-24 19:51:52.267944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:10.770 [2024-04-24 19:51:52.268345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.770 [2024-04-24 19:51:52.268372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.028 [2024-04-24 19:51:52.286930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.028 [2024-04-24 19:51:52.287320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.028 [2024-04-24 19:51:52.287348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.028 [2024-04-24 19:51:52.306322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.028 [2024-04-24 19:51:52.306797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.028 [2024-04-24 19:51:52.306839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.028 [2024-04-24 19:51:52.323251] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.028 [2024-04-24 19:51:52.323653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.028 [2024-04-24 19:51:52.323681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.028 [2024-04-24 19:51:52.339420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.028 [2024-04-24 19:51:52.339859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.028 [2024-04-24 19:51:52.339886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.028 [2024-04-24 19:51:52.358450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.028 [2024-04-24 19:51:52.358946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.028 [2024-04-24 19:51:52.358973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.028 [2024-04-24 19:51:52.379722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.028 [2024-04-24 19:51:52.380100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.028 [2024-04-24 19:51:52.380142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.028 [2024-04-24 19:51:52.398362] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.028 [2024-04-24 19:51:52.398843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.028 [2024-04-24 19:51:52.398885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.028 [2024-04-24 19:51:52.417793] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.028 [2024-04-24 19:51:52.418175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.028 [2024-04-24 19:51:52.418219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.028 [2024-04-24 19:51:52.438740] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.028 [2024-04-24 19:51:52.439340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.028 [2024-04-24 19:51:52.439365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.028 [2024-04-24 19:51:52.459530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.028 [2024-04-24 19:51:52.460158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.028 [2024-04-24 19:51:52.460200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.028 [2024-04-24 19:51:52.481108] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.028 [2024-04-24 19:51:52.481638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.028 [2024-04-24 19:51:52.481666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.028 [2024-04-24 19:51:52.501696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.028 [2024-04-24 19:51:52.502233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.028 [2024-04-24 19:51:52.502277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.028 [2024-04-24 19:51:52.522813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.028 [2024-04-24 19:51:52.523470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.028 [2024-04-24 19:51:52.523496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.286 [2024-04-24 19:51:52.544497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.286 [2024-04-24 19:51:52.544966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.286 [2024-04-24 19:51:52.544995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
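
Each record triple above is one injected failure making the round trip: the digest check fails in tcp.c:data_crc32_calc_done, the affected WRITE is printed, and the command completes with status (00/22), that is status code type 0x0 and status code 0x22, NVMe's Command Transient Transport Error. Because the host was configured with --nvme-error-stat and a -1 retry count, each such completion is retried and tallied per status code rather than surfaced as a failure. Once the run ends, the tally is read back the same way as after the first run; a self-contained sketch of that check, using the paths from this job:

    # Sketch of the pass/fail check, mirroring get_transient_errcount in host/digest.sh.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    (( errcount > 0 ))  # the test passes only if at least one transient transport error was counted
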
00:21:11.286 [2024-04-24 19:51:52.565013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.286 [2024-04-24 19:51:52.565555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.286 [2024-04-24 19:51:52.565600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.286 [2024-04-24 19:51:52.586345] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.286 [2024-04-24 19:51:52.586874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.286 [2024-04-24 19:51:52.586921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.286 [2024-04-24 19:51:52.606801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.286 [2024-04-24 19:51:52.607183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.286 [2024-04-24 19:51:52.607211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.286 [2024-04-24 19:51:52.627280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.286 [2024-04-24 19:51:52.627693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.286 [2024-04-24 19:51:52.627735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.286 [2024-04-24 19:51:52.647563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.287 [2024-04-24 19:51:52.648117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.287 [2024-04-24 19:51:52.648160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.287 [2024-04-24 19:51:52.668605] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.287 [2024-04-24 19:51:52.669195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.287 [2024-04-24 19:51:52.669239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.287 [2024-04-24 19:51:52.689064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.287 [2024-04-24 19:51:52.689600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.287 [2024-04-24 19:51:52.689653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.287 [2024-04-24 19:51:52.706067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.287 [2024-04-24 19:51:52.706500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.287 [2024-04-24 19:51:52.706532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.287 [2024-04-24 19:51:52.724716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.287 [2024-04-24 19:51:52.725198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.287 [2024-04-24 19:51:52.725225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.287 [2024-04-24 19:51:52.744753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.287 [2024-04-24 19:51:52.745158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.287 [2024-04-24 19:51:52.745201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.287 [2024-04-24 19:51:52.765197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.287 [2024-04-24 19:51:52.765614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.287 [2024-04-24 19:51:52.765662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.287 [2024-04-24 19:51:52.784645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.287 [2024-04-24 19:51:52.785240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.287 [2024-04-24 19:51:52.785267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.546 [2024-04-24 19:51:52.805224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.546 [2024-04-24 19:51:52.805650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.546 [2024-04-24 19:51:52.805676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.546 [2024-04-24 19:51:52.825394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.546 [2024-04-24 19:51:52.825809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.546 [2024-04-24 19:51:52.825837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.546 [2024-04-24 19:51:52.843837] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.546 [2024-04-24 19:51:52.844287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.546 [2024-04-24 19:51:52.844329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.546 [2024-04-24 19:51:52.860861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.546 [2024-04-24 19:51:52.861264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.546 [2024-04-24 19:51:52.861308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.546 [2024-04-24 19:51:52.879293] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.546 [2024-04-24 19:51:52.879877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.546 [2024-04-24 19:51:52.879905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.546 [2024-04-24 19:51:52.900929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.546 [2024-04-24 19:51:52.901465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.546 [2024-04-24 19:51:52.901508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.546 [2024-04-24 19:51:52.919532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.546 [2024-04-24 19:51:52.920052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.546 [2024-04-24 19:51:52.920101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.546 [2024-04-24 19:51:52.938951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.546 [2024-04-24 19:51:52.939327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.546 [2024-04-24 19:51:52.939369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.546 [2024-04-24 19:51:52.959086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.546 [2024-04-24 19:51:52.959705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.546 [2024-04-24 19:51:52.959732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.546 [2024-04-24 19:51:52.979881] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.546 [2024-04-24 19:51:52.980374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.546 [2024-04-24 19:51:52.980420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.546 [2024-04-24 19:51:53.000917] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.546 [2024-04-24 19:51:53.001437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.546 [2024-04-24 19:51:53.001479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.546 [2024-04-24 19:51:53.018583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15179a0) with pdu=0x2000190fef90 00:21:11.546 [2024-04-24 19:51:53.019081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.546 [2024-04-24 19:51:53.019122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.546 00:21:11.546 Latency(us) 00:21:11.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.546 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:11.546 nvme0n1 : 2.01 1549.23 193.65 0.00 0.00 10297.09 7573.05 23592.96 00:21:11.546 =================================================================================================================== 00:21:11.546 Total : 1549.23 193.65 0.00 0.00 10297.09 7573.05 23592.96 00:21:11.546 0 00:21:11.546 19:51:53 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:11.546 19:51:53 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:11.546 19:51:53 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:11.546 | .driver_specific 00:21:11.546 | .nvme_error 00:21:11.546 | .status_code 00:21:11.546 | .command_transient_transport_error' 00:21:11.546 19:51:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:11.804 19:51:53 -- host/digest.sh@71 -- # (( 100 > 0 )) 00:21:11.804 19:51:53 -- host/digest.sh@73 -- # killprocess 1772951 00:21:11.804 19:51:53 -- common/autotest_common.sh@936 -- # '[' -z 1772951 ']' 00:21:11.804 19:51:53 -- common/autotest_common.sh@940 -- # kill -0 1772951 00:21:11.804 19:51:53 -- common/autotest_common.sh@941 -- # uname 00:21:11.804 19:51:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:11.804 19:51:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1772951 00:21:11.804 19:51:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:11.804 19:51:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:11.804 19:51:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1772951' 00:21:11.804 killing process with pid 1772951 
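The pass condition for the test is read straight from bdevperf's iostat: the bperf RPC socket is queried for the nvme0n1 bdev and the transient-transport-error counter must be non-zero — the (( 100 > 0 )) above is exactly the 100 digest errors logged earlier. A condensed sketch of that check, assuming the /var/tmp/bperf.sock socket path this test uses:

    count=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( count > 0 )) || exit 1    # every digest error seen on the wire must be accounted for here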
00:21:11.804 19:51:53 -- common/autotest_common.sh@955 -- # kill 1772951 00:21:11.804 Received shutdown signal, test time was about 2.000000 seconds 00:21:11.804 00:21:11.804 Latency(us) 00:21:11.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.804 =================================================================================================================== 00:21:11.804 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.063 19:51:53 -- common/autotest_common.sh@960 -- # wait 1772951 00:21:12.063 19:51:53 -- host/digest.sh@116 -- # killprocess 1771558 00:21:12.063 19:51:53 -- common/autotest_common.sh@936 -- # '[' -z 1771558 ']' 00:21:12.063 19:51:53 -- common/autotest_common.sh@940 -- # kill -0 1771558 00:21:12.063 19:51:53 -- common/autotest_common.sh@941 -- # uname 00:21:12.063 19:51:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:12.321 19:51:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1771558 00:21:12.321 19:51:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:12.321 19:51:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:12.321 19:51:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1771558' 00:21:12.321 killing process with pid 1771558 00:21:12.321 19:51:53 -- common/autotest_common.sh@955 -- # kill 1771558 00:21:12.321 19:51:53 -- common/autotest_common.sh@960 -- # wait 1771558 00:21:12.579 00:21:12.579 real 0m16.146s 00:21:12.579 user 0m32.159s 00:21:12.579 sys 0m3.738s 00:21:12.579 19:51:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:12.579 19:51:53 -- common/autotest_common.sh@10 -- # set +x 00:21:12.579 ************************************ 00:21:12.579 END TEST nvmf_digest_error 00:21:12.579 ************************************ 00:21:12.579 19:51:53 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:12.579 19:51:53 -- host/digest.sh@150 -- # nvmftestfini 00:21:12.579 19:51:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:12.579 19:51:53 -- nvmf/common.sh@117 -- # sync 00:21:12.579 19:51:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:12.579 19:51:53 -- nvmf/common.sh@120 -- # set +e 00:21:12.579 19:51:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:12.579 19:51:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:12.579 rmmod nvme_tcp 00:21:12.579 rmmod nvme_fabrics 00:21:12.579 rmmod nvme_keyring 00:21:12.579 19:51:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:12.579 19:51:53 -- nvmf/common.sh@124 -- # set -e 00:21:12.579 19:51:53 -- nvmf/common.sh@125 -- # return 0 00:21:12.579 19:51:53 -- nvmf/common.sh@478 -- # '[' -n 1771558 ']' 00:21:12.580 19:51:53 -- nvmf/common.sh@479 -- # killprocess 1771558 00:21:12.580 19:51:53 -- common/autotest_common.sh@936 -- # '[' -z 1771558 ']' 00:21:12.580 19:51:53 -- common/autotest_common.sh@940 -- # kill -0 1771558 00:21:12.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1771558) - No such process 00:21:12.580 19:51:53 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1771558 is not found' 00:21:12.580 Process with pid 1771558 is not found 00:21:12.580 19:51:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:12.580 19:51:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:12.580 19:51:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:12.580 19:51:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:12.580 19:51:53 -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:21:12.580 19:51:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.580 19:51:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.580 19:51:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.480 19:51:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:14.480 00:21:14.480 real 0m36.333s 00:21:14.480 user 1m3.441s 00:21:14.480 sys 0m9.476s 00:21:14.480 19:51:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:14.480 19:51:55 -- common/autotest_common.sh@10 -- # set +x 00:21:14.480 ************************************ 00:21:14.480 END TEST nvmf_digest 00:21:14.480 ************************************ 00:21:14.739 19:51:56 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:21:14.739 19:51:56 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:21:14.739 19:51:56 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:21:14.739 19:51:56 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:21:14.739 19:51:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:14.739 19:51:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:14.739 19:51:56 -- common/autotest_common.sh@10 -- # set +x 00:21:14.739 ************************************ 00:21:14.739 START TEST nvmf_bdevperf 00:21:14.739 ************************************ 00:21:14.739 19:51:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:21:14.739 * Looking for test storage... 00:21:14.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:14.739 19:51:56 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.739 19:51:56 -- nvmf/common.sh@7 -- # uname -s 00:21:14.739 19:51:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.739 19:51:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.739 19:51:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.739 19:51:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.739 19:51:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.739 19:51:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.739 19:51:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.739 19:51:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.739 19:51:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.739 19:51:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.739 19:51:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.739 19:51:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.739 19:51:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.739 19:51:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.739 19:51:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.739 19:51:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.739 19:51:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.739 19:51:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.739 19:51:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.739 19:51:56 -- scripts/common.sh@517 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.739 19:51:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.739 19:51:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.739 19:51:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.739 19:51:56 -- paths/export.sh@5 -- # export PATH 00:21:14.739 19:51:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.739 19:51:56 -- nvmf/common.sh@47 -- # : 0 00:21:14.739 19:51:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:14.739 19:51:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:14.739 19:51:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.739 19:51:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.739 19:51:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.739 19:51:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:14.739 19:51:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:14.739 19:51:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:14.739 19:51:56 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:14.739 19:51:56 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:14.739 19:51:56 -- host/bdevperf.sh@24 -- # nvmftestinit 00:21:14.739 19:51:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:14.739 19:51:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.739 19:51:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:14.739 19:51:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:14.739 19:51:56 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:21:14.739 19:51:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.739 19:51:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.739 19:51:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.739 19:51:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:14.739 19:51:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:14.739 19:51:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:14.739 19:51:56 -- common/autotest_common.sh@10 -- # set +x 00:21:16.675 19:51:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:16.675 19:51:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:16.675 19:51:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:16.675 19:51:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:16.675 19:51:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:16.675 19:51:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:16.675 19:51:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:16.675 19:51:58 -- nvmf/common.sh@295 -- # net_devs=() 00:21:16.675 19:51:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:16.675 19:51:58 -- nvmf/common.sh@296 -- # e810=() 00:21:16.675 19:51:58 -- nvmf/common.sh@296 -- # local -ga e810 00:21:16.675 19:51:58 -- nvmf/common.sh@297 -- # x722=() 00:21:16.675 19:51:58 -- nvmf/common.sh@297 -- # local -ga x722 00:21:16.675 19:51:58 -- nvmf/common.sh@298 -- # mlx=() 00:21:16.675 19:51:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:16.675 19:51:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.675 19:51:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.675 19:51:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.675 19:51:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.675 19:51:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.675 19:51:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.675 19:51:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.675 19:51:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.675 19:51:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.676 19:51:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.676 19:51:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.676 19:51:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:16.676 19:51:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:16.676 19:51:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:16.676 19:51:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.676 19:51:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:16.676 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:16.676 19:51:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.676 
19:51:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.676 19:51:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:16.676 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:16.676 19:51:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:16.676 19:51:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.676 19:51:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.676 19:51:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:16.676 19:51:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.676 19:51:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:16.676 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:16.676 19:51:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.676 19:51:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.676 19:51:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.676 19:51:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:16.676 19:51:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.676 19:51:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:16.676 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:16.676 19:51:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.676 19:51:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:16.676 19:51:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:16.676 19:51:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:16.676 19:51:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:16.676 19:51:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.676 19:51:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.676 19:51:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.676 19:51:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:16.676 19:51:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.676 19:51:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.676 19:51:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:16.676 19:51:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.676 19:51:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.676 19:51:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:16.676 19:51:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:16.676 19:51:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.676 19:51:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.936 19:51:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.936 19:51:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.936 19:51:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:16.936 
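The namespace plumbing traced here (and finished just below with the loopback, iptables, and ping steps) collapses to a short sequence: the target-side E810 port cvl_0_0 is moved into a private network namespace so target and initiator can share one physical host, with 10.0.0.2 on the target side and 10.0.0.1 on the initiator side. A condensed sketch using the same interface and namespace names:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port into its own ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # root ns -> target ns sanity check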
19:51:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.936 19:51:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.936 19:51:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.936 19:51:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:16.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:21:16.936 00:21:16.936 --- 10.0.0.2 ping statistics --- 00:21:16.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.936 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:21:16.936 19:51:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:21:16.936 00:21:16.936 --- 10.0.0.1 ping statistics --- 00:21:16.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.936 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:21:16.936 19:51:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.936 19:51:58 -- nvmf/common.sh@411 -- # return 0 00:21:16.936 19:51:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:16.936 19:51:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.936 19:51:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:16.936 19:51:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:16.936 19:51:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.936 19:51:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:16.936 19:51:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:16.936 19:51:58 -- host/bdevperf.sh@25 -- # tgt_init 00:21:16.936 19:51:58 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:21:16.936 19:51:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:16.936 19:51:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:16.936 19:51:58 -- common/autotest_common.sh@10 -- # set +x 00:21:16.936 19:51:58 -- nvmf/common.sh@470 -- # nvmfpid=1775410 00:21:16.936 19:51:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:16.936 19:51:58 -- nvmf/common.sh@471 -- # waitforlisten 1775410 00:21:16.936 19:51:58 -- common/autotest_common.sh@817 -- # '[' -z 1775410 ']' 00:21:16.936 19:51:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.936 19:51:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:16.936 19:51:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.936 19:51:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:16.936 19:51:58 -- common/autotest_common.sh@10 -- # set +x 00:21:16.936 [2024-04-24 19:51:58.366356] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:21:16.936 [2024-04-24 19:51:58.366439] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.936 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.936 [2024-04-24 19:51:58.435507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:17.195 [2024-04-24 19:51:58.556516] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.195 [2024-04-24 19:51:58.556586] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.195 [2024-04-24 19:51:58.556602] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.195 [2024-04-24 19:51:58.556615] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.195 [2024-04-24 19:51:58.556635] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.195 [2024-04-24 19:51:58.556753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.195 [2024-04-24 19:51:58.556848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.195 [2024-04-24 19:51:58.556851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.195 19:51:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:17.195 19:51:58 -- common/autotest_common.sh@850 -- # return 0 00:21:17.195 19:51:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:17.195 19:51:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:17.195 19:51:58 -- common/autotest_common.sh@10 -- # set +x 00:21:17.195 19:51:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.195 19:51:58 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:17.195 19:51:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.195 19:51:58 -- common/autotest_common.sh@10 -- # set +x 00:21:17.195 [2024-04-24 19:51:58.690785] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.195 19:51:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.195 19:51:58 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:17.195 19:51:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.195 19:51:58 -- common/autotest_common.sh@10 -- # set +x 00:21:17.454 Malloc0 00:21:17.454 19:51:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.454 19:51:58 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:17.454 19:51:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.454 19:51:58 -- common/autotest_common.sh@10 -- # set +x 00:21:17.454 19:51:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.454 19:51:58 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:17.454 19:51:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.454 19:51:58 -- common/autotest_common.sh@10 -- # set +x 00:21:17.454 19:51:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.454 19:51:58 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:17.454 19:51:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.454 
19:51:58 -- common/autotest_common.sh@10 -- # set +x 00:21:17.454 [2024-04-24 19:51:58.755112] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.454 19:51:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.454 19:51:58 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:21:17.454 19:51:58 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:21:17.454 19:51:58 -- nvmf/common.sh@521 -- # config=() 00:21:17.454 19:51:58 -- nvmf/common.sh@521 -- # local subsystem config 00:21:17.454 19:51:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:17.454 19:51:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:17.454 { 00:21:17.454 "params": { 00:21:17.454 "name": "Nvme$subsystem", 00:21:17.454 "trtype": "$TEST_TRANSPORT", 00:21:17.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.454 "adrfam": "ipv4", 00:21:17.454 "trsvcid": "$NVMF_PORT", 00:21:17.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.454 "hdgst": ${hdgst:-false}, 00:21:17.454 "ddgst": ${ddgst:-false} 00:21:17.454 }, 00:21:17.454 "method": "bdev_nvme_attach_controller" 00:21:17.454 } 00:21:17.454 EOF 00:21:17.454 )") 00:21:17.454 19:51:58 -- nvmf/common.sh@543 -- # cat 00:21:17.454 19:51:58 -- nvmf/common.sh@545 -- # jq . 00:21:17.454 19:51:58 -- nvmf/common.sh@546 -- # IFS=, 00:21:17.454 19:51:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:17.454 "params": { 00:21:17.454 "name": "Nvme1", 00:21:17.454 "trtype": "tcp", 00:21:17.454 "traddr": "10.0.0.2", 00:21:17.454 "adrfam": "ipv4", 00:21:17.454 "trsvcid": "4420", 00:21:17.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.454 "hdgst": false, 00:21:17.454 "ddgst": false 00:21:17.454 }, 00:21:17.454 "method": "bdev_nvme_attach_controller" 00:21:17.454 }' 00:21:17.454 [2024-04-24 19:51:58.798691] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:21:17.454 [2024-04-24 19:51:58.798768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775448 ] 00:21:17.454 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.454 [2024-04-24 19:51:58.858688] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.454 [2024-04-24 19:51:58.966660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.713 Running I/O for 1 seconds... 
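With the listener announced on 10.0.0.2:4420, the whole target side of this test reduces to five RPCs; recapping the rpc_cmd calls traced above (rpc.py is the stock SPDK script, talking to the nvmf_tgt started inside cvl_0_0_ns_spdk):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192               # TCP transport, io_unit_size 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420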
00:21:19.088 00:21:19.088 Latency(us) 00:21:19.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.088 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:19.088 Verification LBA range: start 0x0 length 0x4000 00:21:19.088 Nvme1n1 : 1.01 8763.54 34.23 0.00 0.00 14531.11 3009.80 15922.82 00:21:19.088 =================================================================================================================== 00:21:19.088 Total : 8763.54 34.23 0.00 0.00 14531.11 3009.80 15922.82 00:21:19.088 19:52:00 -- host/bdevperf.sh@30 -- # bdevperfpid=1775706 00:21:19.088 19:52:00 -- host/bdevperf.sh@32 -- # sleep 3 00:21:19.088 19:52:00 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:21:19.088 19:52:00 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:21:19.088 19:52:00 -- nvmf/common.sh@521 -- # config=() 00:21:19.088 19:52:00 -- nvmf/common.sh@521 -- # local subsystem config 00:21:19.088 19:52:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:19.088 19:52:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:19.088 { 00:21:19.088 "params": { 00:21:19.088 "name": "Nvme$subsystem", 00:21:19.088 "trtype": "$TEST_TRANSPORT", 00:21:19.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:19.088 "adrfam": "ipv4", 00:21:19.088 "trsvcid": "$NVMF_PORT", 00:21:19.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:19.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:19.088 "hdgst": ${hdgst:-false}, 00:21:19.088 "ddgst": ${ddgst:-false} 00:21:19.088 }, 00:21:19.088 "method": "bdev_nvme_attach_controller" 00:21:19.088 } 00:21:19.088 EOF 00:21:19.088 )") 00:21:19.088 19:52:00 -- nvmf/common.sh@543 -- # cat 00:21:19.088 19:52:00 -- nvmf/common.sh@545 -- # jq . 00:21:19.088 19:52:00 -- nvmf/common.sh@546 -- # IFS=, 00:21:19.088 19:52:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:19.088 "params": { 00:21:19.088 "name": "Nvme1", 00:21:19.088 "trtype": "tcp", 00:21:19.088 "traddr": "10.0.0.2", 00:21:19.088 "adrfam": "ipv4", 00:21:19.088 "trsvcid": "4420", 00:21:19.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:19.088 "hdgst": false, 00:21:19.088 "ddgst": false 00:21:19.088 }, 00:21:19.088 "method": "bdev_nvme_attach_controller" 00:21:19.088 }' 00:21:19.088 [2024-04-24 19:52:00.467094] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:21:19.088 [2024-04-24 19:52:00.467175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775706 ] 00:21:19.088 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.088 [2024-04-24 19:52:00.529266] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.346 [2024-04-24 19:52:00.637670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.603 Running I/O for 15 seconds... 
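This second bdevperf run is the disruption scenario: the same 128-deep 4 KiB verify workload, now for 15 seconds, launched with -f so that (as this test relies on) bdevperf keeps running when I/O starts failing, and with the target JSON again delivered over a process-substitution descriptor — the /dev/fd/63 in the trace above. A sketch of the host-side sequence, with $nvmfpid standing in for the target pid (1775410 here):

    build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    kill -9 $nvmfpid      # yank the target mid-run; the ABORTED - SQ DELETION flood
    sleep 3               # below is every queued I/O failing back to the initiator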
00:21:22.135 19:52:03 -- host/bdevperf.sh@33 -- # kill -9 1775410 00:21:22.135 19:52:03 -- host/bdevperf.sh@35 -- # sleep 3 00:21:22.135 [2024-04-24 19:52:03.438441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.135 [2024-04-24 19:52:03.438498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.438536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.135 [2024-04-24 19:52:03.438554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.438575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.135 [2024-04-24 19:52:03.438593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.438612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.135 [2024-04-24 19:52:03.438635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.438656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.135 [2024-04-24 19:52:03.438688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.438705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.135 [2024-04-24 19:52:03.438720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.438739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.135 [2024-04-24 19:52:03.438755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.438784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.135 [2024-04-24 19:52:03.438802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.438819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.135 [2024-04-24 19:52:03.438834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.438851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.135 [2024-04-24 19:52:03.438866] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.438882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.135 [2024-04-24 19:52:03.438897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.438933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.135 [2024-04-24 19:52:03.438948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.438963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.135 [2024-04-24 19:52:03.438994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.439012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.135 [2024-04-24 19:52:03.439027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.439044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.135 [2024-04-24 19:52:03.439060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.439077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.135 [2024-04-24 19:52:03.439092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.439109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.135 [2024-04-24 19:52:03.439125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.439142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.135 [2024-04-24 19:52:03.439158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.439175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.135 [2024-04-24 19:52:03.439191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.439207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.135 [2024-04-24 19:52:03.439227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.439245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.135 [2024-04-24 19:52:03.439260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.439277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.135 [2024-04-24 19:52:03.439292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.439310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.135 [2024-04-24 19:52:03.439325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.439343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.135 [2024-04-24 19:52:03.439358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.439374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.135 [2024-04-24 19:52:03.439390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.439408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.135 [2024-04-24 19:52:03.439424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.135 [2024-04-24 19:52:03.439440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.135 [2024-04-24 19:52:03.439455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.136 [2024-04-24 19:52:03.439473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.136 [2024-04-24 19:52:03.439488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.136 [2024-04-24 19:52:03.439506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.136 [2024-04-24 19:52:03.439521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.136 [2024-04-24 19:52:03.439538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:22.136 [2024-04-24 19:52:03.439554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:22.136 
[2024-04-24 19:52:03.439571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:22.136 [2024-04-24 19:52:03.439586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:22.136 [2024-04-24 19:52:03.439603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:22.136 [2024-04-24 19:52:03.439618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 19:52:03.439648 through 19:52:03.442929: the same print_command/print_completion pair repeats for every remaining queued command, 94 more READs walking lba:38600 through lba:39344 (len:8, step 8, assorted cids) plus one WRITE at lba:39480 (len:0x1000), each completed with ABORTED - SQ DELETION (00/08) ...]
00:21:22.138 [2024-04-24 19:52:03.442944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11100 is same with the state(5) to be set
00:21:22.138 [2024-04-24 19:52:03.442962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:22.138 [2024-04-24 19:52:03.442973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:22.138 [2024-04-24 19:52:03.443001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39352 len:8 PRP1 0x0 PRP2 0x0
00:21:22.138 [2024-04-24 19:52:03.443022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:22.138 [2024-04-24 19:52:03.443103] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e11100 was disconnected and freed. reset controller.
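Everything above is a single burst: when the target side tears down the I/O submission queue during the reset test, the initiator completes every command still queued on qpair 1 with the NVMe generic status "Command Aborted due to SQ Deletion", which spdk_nvme_print_completion renders as the (00/08) pair, status code type 0x00 (generic command status) and status code 0x08. The trailing fields are completion bookkeeping: sqhd is the submission queue head pointer, p the phase tag, m the More bit, and dnr the Do Not Retry bit; dnr:0 is what permits the retries that follow. A minimal decoder for the pair, as an illustrative sketch rather than SPDK's own table (the authoritative mapping is in the NVMe base specification):

/* status_decode.c - illustrative sketch, not SPDK code: decode the
 * "(SCT/SC)" pair printed by spdk_nvme_print_completion, e.g. "(00/08)".
 * Only the codes seen in this log are handled. */
#include <stdio.h>

static const char *decode(unsigned sct, unsigned sc)
{
    if (sct == 0x0) {                 /* Generic Command Status */
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x08: return "ABORTED - SQ DELETION"; /* every completion above */
        }
    }
    return "unknown (see the NVMe base spec status tables)";
}

int main(void)
{
    printf("(%02x/%02x) -> %s\n", 0x0u, 0x8u, decode(0x0, 0x08));
    return 0;
}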
00:21:22.138 [2024-04-24 19:52:03.446957] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:22.138 [2024-04-24 19:52:03.447045] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:22.138 [2024-04-24 19:52:03.447795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.138 [2024-04-24 19:52:03.447980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.138 [2024-04-24 19:52:03.448010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:22.138 [2024-04-24 19:52:03.448028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:22.138 [2024-04-24 19:52:03.448267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:22.138 [2024-04-24 19:52:03.448510] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:22.138 [2024-04-24 19:52:03.448539] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:22.138 [2024-04-24 19:52:03.448559] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:22.138 [2024-04-24 19:52:03.452172] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
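posix.c prints only the raw errno, so it is worth decoding: on Linux, errno 111 is ECONNREFUSED, meaning nothing is accepting TCP connections on 10.0.0.2:4420 while the target is down, and the "(9)" in the flush failure is EBADF, since the qpair's socket descriptor was already closed by the disconnect. A quick sanity check (errno numbers are platform specific; 111 and 9 match glibc on x86-64):

/* errno_names.c - tiny sketch mapping the raw errno values in this log
 * to symbolic names and messages. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* "connect() failed, errno = 111" */
    printf("111 = %s (matches ECONNREFUSED: %d)\n", strerror(111), 111 == ECONNREFUSED);
    /* "Failed to flush tqpair=0x1be1170 (9): Bad file descriptor" */
    printf("  9 = %s (matches EBADF: %d)\n", strerror(9), 9 == EBADF);
    return 0;
}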
[... the reset cycle starting at 19:52:03.446957 repeats 23 more times, at roughly 14 ms intervals, from 19:52:03.461241 through 19:52:03.771482 (harness time 00:21:22.138 through 00:21:22.401); every pass fails identically: both connect() attempts to 10.0.0.2:4420 return errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x1be1170, the flush fails with (9): Bad file descriptor, nvme_ctrlr_process_init finds the controller in error state, spdk_nvme_ctrlr_reconnect_poll_async reports "controller reinitialization failed", nvme_ctrlr_fail leaves nqn.2016-06.io.spdk:cnode1 in failed state, and _bdev_nvme_reset_ctrlr_complete logs "Resetting controller failed." ...]
00:21:22.401 [2024-04-24 19:52:03.780517] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.401 [2024-04-24 19:52:03.780978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.781252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.781304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.401 [2024-04-24 19:52:03.781322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.401 [2024-04-24 19:52:03.781560] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.401 [2024-04-24 19:52:03.781822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.401 [2024-04-24 19:52:03.781849] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.401 [2024-04-24 19:52:03.781865] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.401 [2024-04-24 19:52:03.785417] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.401 [2024-04-24 19:52:03.794346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.401 [2024-04-24 19:52:03.794835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.795067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.795097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.401 [2024-04-24 19:52:03.795115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.401 [2024-04-24 19:52:03.795354] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.401 [2024-04-24 19:52:03.795604] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.401 [2024-04-24 19:52:03.795641] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.401 [2024-04-24 19:52:03.795667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.401 [2024-04-24 19:52:03.799224] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.401 [2024-04-24 19:52:03.808255] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.401 [2024-04-24 19:52:03.808739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.808972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.809002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.401 [2024-04-24 19:52:03.809020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.401 [2024-04-24 19:52:03.809259] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.401 [2024-04-24 19:52:03.809502] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.401 [2024-04-24 19:52:03.809526] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.401 [2024-04-24 19:52:03.809541] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.401 [2024-04-24 19:52:03.813115] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.401 [2024-04-24 19:52:03.822148] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.401 [2024-04-24 19:52:03.822599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.822842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.822873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.401 [2024-04-24 19:52:03.822891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.401 [2024-04-24 19:52:03.823130] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.401 [2024-04-24 19:52:03.823373] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.401 [2024-04-24 19:52:03.823398] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.401 [2024-04-24 19:52:03.823414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.401 [2024-04-24 19:52:03.826986] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.401 [2024-04-24 19:52:03.836024] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.401 [2024-04-24 19:52:03.836465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.836698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.836728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.401 [2024-04-24 19:52:03.836747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.401 [2024-04-24 19:52:03.836985] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.401 [2024-04-24 19:52:03.837229] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.401 [2024-04-24 19:52:03.837260] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.401 [2024-04-24 19:52:03.837276] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.401 [2024-04-24 19:52:03.840851] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.401 [2024-04-24 19:52:03.849882] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.401 [2024-04-24 19:52:03.850356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.850592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.850643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.401 [2024-04-24 19:52:03.850667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.401 [2024-04-24 19:52:03.850925] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.401 [2024-04-24 19:52:03.851168] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.401 [2024-04-24 19:52:03.851193] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.401 [2024-04-24 19:52:03.851209] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.401 [2024-04-24 19:52:03.854820] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.401 [2024-04-24 19:52:03.863842] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.401 [2024-04-24 19:52:03.864307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.864536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.864566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.401 [2024-04-24 19:52:03.864584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.401 [2024-04-24 19:52:03.864843] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.401 [2024-04-24 19:52:03.865088] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.401 [2024-04-24 19:52:03.865113] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.401 [2024-04-24 19:52:03.865129] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.401 [2024-04-24 19:52:03.868703] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.401 [2024-04-24 19:52:03.877728] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.401 [2024-04-24 19:52:03.878167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.878535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.401 [2024-04-24 19:52:03.878588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.401 [2024-04-24 19:52:03.878606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.401 [2024-04-24 19:52:03.878861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.401 [2024-04-24 19:52:03.879104] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.402 [2024-04-24 19:52:03.879130] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.402 [2024-04-24 19:52:03.879151] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.402 [2024-04-24 19:52:03.882720] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.402 [2024-04-24 19:52:03.891740] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.402 [2024-04-24 19:52:03.892204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 19:52:03.892536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 19:52:03.892592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.402 [2024-04-24 19:52:03.892611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.402 [2024-04-24 19:52:03.892865] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.402 [2024-04-24 19:52:03.893110] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.402 [2024-04-24 19:52:03.893135] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.402 [2024-04-24 19:52:03.893151] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.402 [2024-04-24 19:52:03.896723] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.402 [2024-04-24 19:52:03.905786] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.402 [2024-04-24 19:52:03.906252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 19:52:03.906512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.402 [2024-04-24 19:52:03.906541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.402 [2024-04-24 19:52:03.906559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.402 [2024-04-24 19:52:03.906807] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.402 [2024-04-24 19:52:03.907050] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.402 [2024-04-24 19:52:03.907074] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.402 [2024-04-24 19:52:03.907089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.402 [2024-04-24 19:52:03.910665] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.662 [2024-04-24 19:52:03.919758] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.662 [2024-04-24 19:52:03.920381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:03.920725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:03.920752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.662 [2024-04-24 19:52:03.920768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.662 [2024-04-24 19:52:03.921029] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.662 [2024-04-24 19:52:03.921272] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.662 [2024-04-24 19:52:03.921296] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.662 [2024-04-24 19:52:03.921312] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.662 [2024-04-24 19:52:03.924830] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.662 [2024-04-24 19:52:03.933527] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.662 [2024-04-24 19:52:03.934086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:03.934320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:03.934348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.662 [2024-04-24 19:52:03.934366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.662 [2024-04-24 19:52:03.934604] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.662 [2024-04-24 19:52:03.934835] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.662 [2024-04-24 19:52:03.934856] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.662 [2024-04-24 19:52:03.934869] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.662 [2024-04-24 19:52:03.938397] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.662 [2024-04-24 19:52:03.947405] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.662 [2024-04-24 19:52:03.947879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:03.948147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:03.948175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.662 [2024-04-24 19:52:03.948193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.662 [2024-04-24 19:52:03.948430] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.662 [2024-04-24 19:52:03.948685] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.662 [2024-04-24 19:52:03.948706] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.662 [2024-04-24 19:52:03.948719] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.662 [2024-04-24 19:52:03.952100] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.662 [2024-04-24 19:52:03.961233] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.662 [2024-04-24 19:52:03.961697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:03.961930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:03.961959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.662 [2024-04-24 19:52:03.961976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.662 [2024-04-24 19:52:03.962214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.662 [2024-04-24 19:52:03.962456] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.662 [2024-04-24 19:52:03.962480] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.662 [2024-04-24 19:52:03.962497] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.662 [2024-04-24 19:52:03.966062] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.662 [2024-04-24 19:52:03.975107] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.662 [2024-04-24 19:52:03.975712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:03.975945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:03.975973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.662 [2024-04-24 19:52:03.975991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.662 [2024-04-24 19:52:03.976229] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.662 [2024-04-24 19:52:03.976471] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.662 [2024-04-24 19:52:03.976495] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.662 [2024-04-24 19:52:03.976511] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.662 [2024-04-24 19:52:03.980080] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.662 [2024-04-24 19:52:03.989137] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.662 [2024-04-24 19:52:03.989604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:03.989846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:03.989875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.662 [2024-04-24 19:52:03.989893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.662 [2024-04-24 19:52:03.990131] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.662 [2024-04-24 19:52:03.990373] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.662 [2024-04-24 19:52:03.990397] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.662 [2024-04-24 19:52:03.990412] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.662 [2024-04-24 19:52:03.993983] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.662 [2024-04-24 19:52:04.003009] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.662 [2024-04-24 19:52:04.003483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:04.003739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:04.003770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.662 [2024-04-24 19:52:04.003788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.662 [2024-04-24 19:52:04.004026] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.662 [2024-04-24 19:52:04.004268] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.662 [2024-04-24 19:52:04.004292] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.662 [2024-04-24 19:52:04.004308] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.662 [2024-04-24 19:52:04.007877] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.662 [2024-04-24 19:52:04.016903] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.662 [2024-04-24 19:52:04.017382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:04.017607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:04.017647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.662 [2024-04-24 19:52:04.017673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.662 [2024-04-24 19:52:04.017911] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.662 [2024-04-24 19:52:04.018153] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.662 [2024-04-24 19:52:04.018177] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.662 [2024-04-24 19:52:04.018193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.662 [2024-04-24 19:52:04.021761] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.662 [2024-04-24 19:52:04.030788] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.662 [2024-04-24 19:52:04.031260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.662 [2024-04-24 19:52:04.031481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.031506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.663 [2024-04-24 19:52:04.031537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.663 [2024-04-24 19:52:04.031778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.663 [2024-04-24 19:52:04.032031] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.663 [2024-04-24 19:52:04.032056] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.663 [2024-04-24 19:52:04.032072] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.663 [2024-04-24 19:52:04.035620] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.663 [2024-04-24 19:52:04.044648] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.663 [2024-04-24 19:52:04.045083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.045397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.045460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.663 [2024-04-24 19:52:04.045478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.663 [2024-04-24 19:52:04.045735] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.663 [2024-04-24 19:52:04.045978] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.663 [2024-04-24 19:52:04.046003] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.663 [2024-04-24 19:52:04.046018] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.663 [2024-04-24 19:52:04.049567] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.663 [2024-04-24 19:52:04.058597] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.663 [2024-04-24 19:52:04.059068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.059305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.059359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.663 [2024-04-24 19:52:04.059379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.663 [2024-04-24 19:52:04.059616] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.663 [2024-04-24 19:52:04.059878] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.663 [2024-04-24 19:52:04.059903] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.663 [2024-04-24 19:52:04.059918] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.663 [2024-04-24 19:52:04.063467] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.663 [2024-04-24 19:52:04.072492] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.663 [2024-04-24 19:52:04.072971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.073371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.073428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.663 [2024-04-24 19:52:04.073445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.663 [2024-04-24 19:52:04.073700] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.663 [2024-04-24 19:52:04.073944] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.663 [2024-04-24 19:52:04.073968] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.663 [2024-04-24 19:52:04.073984] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.663 [2024-04-24 19:52:04.077537] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.663 [2024-04-24 19:52:04.086358] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.663 [2024-04-24 19:52:04.086802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.086977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.087006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.663 [2024-04-24 19:52:04.087024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.663 [2024-04-24 19:52:04.087261] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.663 [2024-04-24 19:52:04.087503] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.663 [2024-04-24 19:52:04.087527] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.663 [2024-04-24 19:52:04.087542] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.663 [2024-04-24 19:52:04.091109] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.663 [2024-04-24 19:52:04.100339] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.663 [2024-04-24 19:52:04.100799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.101071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.101096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.663 [2024-04-24 19:52:04.101118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.663 [2024-04-24 19:52:04.101358] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.663 [2024-04-24 19:52:04.101600] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.663 [2024-04-24 19:52:04.101624] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.663 [2024-04-24 19:52:04.101659] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.663 [2024-04-24 19:52:04.105214] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.663 [2024-04-24 19:52:04.114234] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.663 [2024-04-24 19:52:04.114674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.114945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.114974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.663 [2024-04-24 19:52:04.114992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.663 [2024-04-24 19:52:04.115229] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.663 [2024-04-24 19:52:04.115471] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.663 [2024-04-24 19:52:04.115495] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.663 [2024-04-24 19:52:04.115511] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.663 [2024-04-24 19:52:04.119078] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.663 [2024-04-24 19:52:04.128098] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.663 [2024-04-24 19:52:04.128569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.128741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.128771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.663 [2024-04-24 19:52:04.128790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.663 [2024-04-24 19:52:04.129027] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.663 [2024-04-24 19:52:04.129269] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.663 [2024-04-24 19:52:04.129293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.663 [2024-04-24 19:52:04.129309] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.663 [2024-04-24 19:52:04.132875] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.663 [2024-04-24 19:52:04.142114] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.663 [2024-04-24 19:52:04.142557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.142763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.142794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.663 [2024-04-24 19:52:04.142813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.663 [2024-04-24 19:52:04.143057] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.663 [2024-04-24 19:52:04.143299] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.663 [2024-04-24 19:52:04.143324] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.663 [2024-04-24 19:52:04.143340] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.663 [2024-04-24 19:52:04.146909] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.663 [2024-04-24 19:52:04.155935] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.663 [2024-04-24 19:52:04.156400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.156593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.663 [2024-04-24 19:52:04.156621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.663 [2024-04-24 19:52:04.156656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.663 [2024-04-24 19:52:04.156898] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.664 [2024-04-24 19:52:04.157140] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.664 [2024-04-24 19:52:04.157164] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.664 [2024-04-24 19:52:04.157180] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.664 [2024-04-24 19:52:04.160746] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.664 [2024-04-24 19:52:04.169755] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.664 [2024-04-24 19:52:04.170225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.664 [2024-04-24 19:52:04.170564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.664 [2024-04-24 19:52:04.170588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.664 [2024-04-24 19:52:04.170604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.664 [2024-04-24 19:52:04.170859] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.664 [2024-04-24 19:52:04.171103] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.664 [2024-04-24 19:52:04.171128] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.664 [2024-04-24 19:52:04.171144] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.664 [2024-04-24 19:52:04.174710] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.924 [2024-04-24 19:52:04.183740] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.924 [2024-04-24 19:52:04.184204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.924 [2024-04-24 19:52:04.184547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.924 [2024-04-24 19:52:04.184598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.924 [2024-04-24 19:52:04.184615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.924 [2024-04-24 19:52:04.184867] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.924 [2024-04-24 19:52:04.185116] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.924 [2024-04-24 19:52:04.185141] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.924 [2024-04-24 19:52:04.185157] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.924 [2024-04-24 19:52:04.188722] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.924 [2024-04-24 19:52:04.197737] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.924 [2024-04-24 19:52:04.198194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.924 [2024-04-24 19:52:04.198448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.924 [2024-04-24 19:52:04.198476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.924 [2024-04-24 19:52:04.198494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.924 [2024-04-24 19:52:04.198750] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.924 [2024-04-24 19:52:04.198986] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.924 [2024-04-24 19:52:04.199020] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.924 [2024-04-24 19:52:04.199033] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.924 [2024-04-24 19:52:04.202344] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.924 [2024-04-24 19:52:04.211680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.924 [2024-04-24 19:52:04.212145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.924 [2024-04-24 19:52:04.212497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.924 [2024-04-24 19:52:04.212546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.924 [2024-04-24 19:52:04.212564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.924 [2024-04-24 19:52:04.212812] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.924 [2024-04-24 19:52:04.213055] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.924 [2024-04-24 19:52:04.213079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.924 [2024-04-24 19:52:04.213095] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.924 [2024-04-24 19:52:04.216665] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.924 [2024-04-24 19:52:04.225690] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.924 [2024-04-24 19:52:04.226163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.924 [2024-04-24 19:52:04.226461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.924 [2024-04-24 19:52:04.226512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.924 [2024-04-24 19:52:04.226530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.924 [2024-04-24 19:52:04.226787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.924 [2024-04-24 19:52:04.227031] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.924 [2024-04-24 19:52:04.227060] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.924 [2024-04-24 19:52:04.227077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.924 [2024-04-24 19:52:04.230636] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.924 [2024-04-24 19:52:04.239684] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.925 [2024-04-24 19:52:04.240123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.925 [2024-04-24 19:52:04.240478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.925 [2024-04-24 19:52:04.240527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.925 [2024-04-24 19:52:04.240545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.925 [2024-04-24 19:52:04.240801] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.925 [2024-04-24 19:52:04.241045] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.925 [2024-04-24 19:52:04.241069] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.925 [2024-04-24 19:52:04.241085] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.925 [2024-04-24 19:52:04.244648] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.925 [2024-04-24 19:52:04.253689] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.925 [2024-04-24 19:52:04.254149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.925 [2024-04-24 19:52:04.254470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.925 [2024-04-24 19:52:04.254520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.925 [2024-04-24 19:52:04.254538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.925 [2024-04-24 19:52:04.254786] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.925 [2024-04-24 19:52:04.255029] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.925 [2024-04-24 19:52:04.255054] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.925 [2024-04-24 19:52:04.255070] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.925 [2024-04-24 19:52:04.258634] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:22.925 [2024-04-24 19:52:04.267663] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:22.925 [2024-04-24 19:52:04.268129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.925 [2024-04-24 19:52:04.268382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:22.925 [2024-04-24 19:52:04.268422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:22.925 [2024-04-24 19:52:04.268438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:22.925 [2024-04-24 19:52:04.268712] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:22.925 [2024-04-24 19:52:04.268956] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:22.925 [2024-04-24 19:52:04.268980] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:22.925 [2024-04-24 19:52:04.269005] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:22.925 [2024-04-24 19:52:04.272553] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:22.925 [2024-04-24 19:52:04.281600] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:22.925 [2024-04-24 19:52:04.282071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.925 [2024-04-24 19:52:04.282485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.925 [2024-04-24 19:52:04.282543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:22.925 [2024-04-24 19:52:04.282561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:22.925 [2024-04-24 19:52:04.282809] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:22.925 [2024-04-24 19:52:04.283052] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:22.925 [2024-04-24 19:52:04.283076] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:22.925 [2024-04-24 19:52:04.283092] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:22.925 [2024-04-24 19:52:04.286666] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:22.925 [2024-04-24 19:52:04.295475] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:22.925 [2024-04-24 19:52:04.295962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.925 [2024-04-24 19:52:04.296180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.925 [2024-04-24 19:52:04.296209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:22.925 [2024-04-24 19:52:04.296227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:22.925 [2024-04-24 19:52:04.296465] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:22.925 [2024-04-24 19:52:04.296726] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:22.925 [2024-04-24 19:52:04.296752] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:22.925 [2024-04-24 19:52:04.296768] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:22.925 [2024-04-24 19:52:04.300319] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:22.925 [2024-04-24 19:52:04.309385] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:22.925 [2024-04-24 19:52:04.309856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.925 [2024-04-24 19:52:04.310094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.925 [2024-04-24 19:52:04.310123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:22.925 [2024-04-24 19:52:04.310141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:22.925 [2024-04-24 19:52:04.310379] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:22.925 [2024-04-24 19:52:04.310620] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:22.925 [2024-04-24 19:52:04.310655] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:22.925 [2024-04-24 19:52:04.310671] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:22.925 [2024-04-24 19:52:04.314236] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:22.925 [2024-04-24 19:52:04.323271] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:22.925 [2024-04-24 19:52:04.323736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.925 [2024-04-24 19:52:04.323930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.925 [2024-04-24 19:52:04.323955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:22.925 [2024-04-24 19:52:04.323970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:22.925 [2024-04-24 19:52:04.324227] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:22.925 [2024-04-24 19:52:04.324469] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:22.925 [2024-04-24 19:52:04.324493] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:22.925 [2024-04-24 19:52:04.324509] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:22.925 [2024-04-24 19:52:04.328074] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:22.926 [2024-04-24 19:52:04.337111] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:22.926 [2024-04-24 19:52:04.337626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.926 [2024-04-24 19:52:04.337870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.926 [2024-04-24 19:52:04.337898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:22.926 [2024-04-24 19:52:04.337926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:22.926 [2024-04-24 19:52:04.338163] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:22.926 [2024-04-24 19:52:04.338405] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:22.926 [2024-04-24 19:52:04.338429] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:22.926 [2024-04-24 19:52:04.338445] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:22.926 [2024-04-24 19:52:04.342012] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:22.926 [2024-04-24 19:52:04.351059] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:22.926 [2024-04-24 19:52:04.351496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.926 [2024-04-24 19:52:04.351670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.926 [2024-04-24 19:52:04.351697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:22.926 [2024-04-24 19:52:04.351714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:22.926 [2024-04-24 19:52:04.351968] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:22.926 [2024-04-24 19:52:04.352210] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:22.926 [2024-04-24 19:52:04.352235] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:22.926 [2024-04-24 19:52:04.352250] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:22.926 [2024-04-24 19:52:04.355816] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:22.926 [2024-04-24 19:52:04.365087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:22.926 [2024-04-24 19:52:04.365552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.926 [2024-04-24 19:52:04.365790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.926 [2024-04-24 19:52:04.365821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:22.926 [2024-04-24 19:52:04.365840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:22.926 [2024-04-24 19:52:04.366078] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:22.926 [2024-04-24 19:52:04.366320] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:22.926 [2024-04-24 19:52:04.366344] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:22.926 [2024-04-24 19:52:04.366360] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:22.926 [2024-04-24 19:52:04.369927] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:22.926 [2024-04-24 19:52:04.378992] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:22.926 [2024-04-24 19:52:04.379464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.926 [2024-04-24 19:52:04.379670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.926 [2024-04-24 19:52:04.379698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:22.926 [2024-04-24 19:52:04.379715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:22.926 [2024-04-24 19:52:04.379982] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:22.926 [2024-04-24 19:52:04.380225] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:22.926 [2024-04-24 19:52:04.380250] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:22.926 [2024-04-24 19:52:04.380265] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:22.926 [2024-04-24 19:52:04.383832] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:22.926 [2024-04-24 19:52:04.392870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:22.926 [2024-04-24 19:52:04.393513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.926 [2024-04-24 19:52:04.393749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.926 [2024-04-24 19:52:04.393779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:22.926 [2024-04-24 19:52:04.393797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:22.926 [2024-04-24 19:52:04.394034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:22.926 [2024-04-24 19:52:04.394276] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:22.926 [2024-04-24 19:52:04.394300] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:22.926 [2024-04-24 19:52:04.394315] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:22.926 [2024-04-24 19:52:04.397886] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:22.926 [2024-04-24 19:52:04.406734] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:22.926 [2024-04-24 19:52:04.407322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.926 [2024-04-24 19:52:04.407698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.926 [2024-04-24 19:52:04.407729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:22.926 [2024-04-24 19:52:04.407747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:22.926 [2024-04-24 19:52:04.407985] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:22.926 [2024-04-24 19:52:04.408228] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:22.926 [2024-04-24 19:52:04.408252] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:22.926 [2024-04-24 19:52:04.408267] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:22.926 [2024-04-24 19:52:04.411842] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:22.926 [2024-04-24 19:52:04.420526] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:22.926 [2024-04-24 19:52:04.421009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.926 [2024-04-24 19:52:04.421295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.926 [2024-04-24 19:52:04.421324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:22.927 [2024-04-24 19:52:04.421342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:22.927 [2024-04-24 19:52:04.421580] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:22.927 [2024-04-24 19:52:04.421829] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:22.927 [2024-04-24 19:52:04.421852] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:22.927 [2024-04-24 19:52:04.421866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:22.927 [2024-04-24 19:52:04.425452] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:22.927 [2024-04-24 19:52:04.434487] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:22.927 [2024-04-24 19:52:04.434974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.927 [2024-04-24 19:52:04.435215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:22.927 [2024-04-24 19:52:04.435241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:22.927 [2024-04-24 19:52:04.435258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:22.927 [2024-04-24 19:52:04.435511] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:22.927 [2024-04-24 19:52:04.435775] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:22.927 [2024-04-24 19:52:04.435799] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:22.927 [2024-04-24 19:52:04.435813] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.185 [2024-04-24 19:52:04.439435] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.185 [2024-04-24 19:52:04.448330] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.185 [2024-04-24 19:52:04.448852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.185 [2024-04-24 19:52:04.449079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.185 [2024-04-24 19:52:04.449133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.185 [2024-04-24 19:52:04.449152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.185 [2024-04-24 19:52:04.449390] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.185 [2024-04-24 19:52:04.449604] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.185 [2024-04-24 19:52:04.449646] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.185 [2024-04-24 19:52:04.449660] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.185 [2024-04-24 19:52:04.453096] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.185 [2024-04-24 19:52:04.462231] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.185 [2024-04-24 19:52:04.462700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.185 [2024-04-24 19:52:04.462940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.185 [2024-04-24 19:52:04.462969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.185 [2024-04-24 19:52:04.462987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.185 [2024-04-24 19:52:04.463224] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.185 [2024-04-24 19:52:04.463466] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.185 [2024-04-24 19:52:04.463490] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.185 [2024-04-24 19:52:04.463506] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.185 [2024-04-24 19:52:04.467074] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.185 [2024-04-24 19:52:04.476110] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.185 [2024-04-24 19:52:04.476546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.185 [2024-04-24 19:52:04.476804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.185 [2024-04-24 19:52:04.476835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.185 [2024-04-24 19:52:04.476853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.185 [2024-04-24 19:52:04.477091] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.185 [2024-04-24 19:52:04.477334] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.185 [2024-04-24 19:52:04.477358] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.185 [2024-04-24 19:52:04.477373] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.185 [2024-04-24 19:52:04.480946] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.185 [2024-04-24 19:52:04.490045] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.185 [2024-04-24 19:52:04.490508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.185 [2024-04-24 19:52:04.490690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.185 [2024-04-24 19:52:04.490721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.185 [2024-04-24 19:52:04.490744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.185 [2024-04-24 19:52:04.490983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.185 [2024-04-24 19:52:04.491226] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.185 [2024-04-24 19:52:04.491250] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.185 [2024-04-24 19:52:04.491265] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.185 [2024-04-24 19:52:04.494835] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.185 [2024-04-24 19:52:04.503862] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.185 [2024-04-24 19:52:04.504323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.185 [2024-04-24 19:52:04.504524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.185 [2024-04-24 19:52:04.504552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.185 [2024-04-24 19:52:04.504570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.185 [2024-04-24 19:52:04.504825] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.185 [2024-04-24 19:52:04.505069] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.185 [2024-04-24 19:52:04.505093] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.185 [2024-04-24 19:52:04.505109] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.185 [2024-04-24 19:52:04.508678] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.185 [2024-04-24 19:52:04.517701] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.185 [2024-04-24 19:52:04.518172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.185 [2024-04-24 19:52:04.518472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.185 [2024-04-24 19:52:04.518523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.185 [2024-04-24 19:52:04.518541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.185 [2024-04-24 19:52:04.518794] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.185 [2024-04-24 19:52:04.519040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.185 [2024-04-24 19:52:04.519064] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.185 [2024-04-24 19:52:04.519079] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.185 [2024-04-24 19:52:04.522634] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.185 [2024-04-24 19:52:04.531655] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.186 [2024-04-24 19:52:04.532091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.532485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.532554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.186 [2024-04-24 19:52:04.532572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.186 [2024-04-24 19:52:04.532834] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.186 [2024-04-24 19:52:04.533078] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.186 [2024-04-24 19:52:04.533102] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.186 [2024-04-24 19:52:04.533117] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.186 [2024-04-24 19:52:04.536681] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.186 [2024-04-24 19:52:04.545479] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.186 [2024-04-24 19:52:04.545929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.546125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.546151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.186 [2024-04-24 19:52:04.546167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.186 [2024-04-24 19:52:04.546418] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.186 [2024-04-24 19:52:04.546681] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.186 [2024-04-24 19:52:04.546708] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.186 [2024-04-24 19:52:04.546724] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.186 [2024-04-24 19:52:04.550273] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.186 [2024-04-24 19:52:04.559292] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.186 [2024-04-24 19:52:04.559769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.559989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.560015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.186 [2024-04-24 19:52:04.560031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.186 [2024-04-24 19:52:04.560284] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.186 [2024-04-24 19:52:04.560527] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.186 [2024-04-24 19:52:04.560551] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.186 [2024-04-24 19:52:04.560566] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.186 [2024-04-24 19:52:04.564131] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.186 [2024-04-24 19:52:04.573152] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.186 [2024-04-24 19:52:04.573750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.573974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.574041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.186 [2024-04-24 19:52:04.574060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.186 [2024-04-24 19:52:04.574297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.186 [2024-04-24 19:52:04.574545] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.186 [2024-04-24 19:52:04.574569] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.186 [2024-04-24 19:52:04.574584] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.186 [2024-04-24 19:52:04.578155] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.186 [2024-04-24 19:52:04.586991] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.186 [2024-04-24 19:52:04.587426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.587652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.587695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.186 [2024-04-24 19:52:04.587712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.186 [2024-04-24 19:52:04.587967] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.186 [2024-04-24 19:52:04.588210] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.186 [2024-04-24 19:52:04.588234] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.186 [2024-04-24 19:52:04.588249] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.186 [2024-04-24 19:52:04.591815] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.186 [2024-04-24 19:52:04.600848] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.186 [2024-04-24 19:52:04.601273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.601676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.601709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.186 [2024-04-24 19:52:04.601727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.186 [2024-04-24 19:52:04.601966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.186 [2024-04-24 19:52:04.602209] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.186 [2024-04-24 19:52:04.602233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.186 [2024-04-24 19:52:04.602248] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.186 [2024-04-24 19:52:04.605824] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.186 [2024-04-24 19:52:04.614849] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.186 [2024-04-24 19:52:04.615308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.615643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.615673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.186 [2024-04-24 19:52:04.615689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.186 [2024-04-24 19:52:04.615936] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.186 [2024-04-24 19:52:04.616179] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.186 [2024-04-24 19:52:04.616209] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.186 [2024-04-24 19:52:04.616225] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.186 [2024-04-24 19:52:04.619790] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.186 [2024-04-24 19:52:04.628833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.186 [2024-04-24 19:52:04.629295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.629525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.629553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.186 [2024-04-24 19:52:04.629571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.186 [2024-04-24 19:52:04.629826] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.186 [2024-04-24 19:52:04.630071] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.186 [2024-04-24 19:52:04.630095] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.186 [2024-04-24 19:52:04.630110] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.186 [2024-04-24 19:52:04.633675] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.186 [2024-04-24 19:52:04.642699] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.186 [2024-04-24 19:52:04.643170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.643534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.186 [2024-04-24 19:52:04.643585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.186 [2024-04-24 19:52:04.643603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.186 [2024-04-24 19:52:04.643852] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.186 [2024-04-24 19:52:04.644095] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.186 [2024-04-24 19:52:04.644120] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.186 [2024-04-24 19:52:04.644136] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.186 [2024-04-24 19:52:04.647708] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.186 [2024-04-24 19:52:04.656544] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.186 [2024-04-24 19:52:04.656967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.187 [2024-04-24 19:52:04.657172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.187 [2024-04-24 19:52:04.657202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.187 [2024-04-24 19:52:04.657220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.187 [2024-04-24 19:52:04.657458] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.187 [2024-04-24 19:52:04.657712] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.187 [2024-04-24 19:52:04.657738] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.187 [2024-04-24 19:52:04.657760] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.187 [2024-04-24 19:52:04.661331] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.187 [2024-04-24 19:52:04.670359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.187 [2024-04-24 19:52:04.670820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.187 [2024-04-24 19:52:04.671050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.187 [2024-04-24 19:52:04.671080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.187 [2024-04-24 19:52:04.671098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.187 [2024-04-24 19:52:04.671336] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.187 [2024-04-24 19:52:04.671578] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.187 [2024-04-24 19:52:04.671603] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.187 [2024-04-24 19:52:04.671619] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.187 [2024-04-24 19:52:04.675203] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.187 [2024-04-24 19:52:04.684280] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.187 [2024-04-24 19:52:04.684737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.187 [2024-04-24 19:52:04.684940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.187 [2024-04-24 19:52:04.684969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.187 [2024-04-24 19:52:04.684987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.187 [2024-04-24 19:52:04.685224] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.187 [2024-04-24 19:52:04.685467] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.187 [2024-04-24 19:52:04.685491] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.187 [2024-04-24 19:52:04.685506] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.187 [2024-04-24 19:52:04.689076] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.187 [2024-04-24 19:52:04.698122] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.445 [2024-04-24 19:52:04.698579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.698791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.698820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.445 [2024-04-24 19:52:04.698838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.445 [2024-04-24 19:52:04.699076] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.445 [2024-04-24 19:52:04.699318] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.445 [2024-04-24 19:52:04.699343] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.445 [2024-04-24 19:52:04.699359] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.445 [2024-04-24 19:52:04.702955] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.445 [2024-04-24 19:52:04.712134] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.445 [2024-04-24 19:52:04.712573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.712783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.712811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.445 [2024-04-24 19:52:04.712827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.445 [2024-04-24 19:52:04.713046] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.445 [2024-04-24 19:52:04.713246] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.445 [2024-04-24 19:52:04.713267] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.445 [2024-04-24 19:52:04.713280] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.445 [2024-04-24 19:52:04.716849] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.445 [2024-04-24 19:52:04.726007] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.445 [2024-04-24 19:52:04.726553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.726799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.726826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.445 [2024-04-24 19:52:04.726843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.445 [2024-04-24 19:52:04.727089] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.445 [2024-04-24 19:52:04.727333] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.445 [2024-04-24 19:52:04.727358] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.445 [2024-04-24 19:52:04.727375] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.445 [2024-04-24 19:52:04.731001] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.445 [2024-04-24 19:52:04.739913] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.445 [2024-04-24 19:52:04.740369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.740592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.740618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.445 [2024-04-24 19:52:04.740647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.445 [2024-04-24 19:52:04.740869] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.445 [2024-04-24 19:52:04.741135] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.445 [2024-04-24 19:52:04.741160] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.445 [2024-04-24 19:52:04.741177] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.445 [2024-04-24 19:52:04.744778] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.445 [2024-04-24 19:52:04.753936] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.445 [2024-04-24 19:52:04.754435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.754713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.754741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.445 [2024-04-24 19:52:04.754757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.445 [2024-04-24 19:52:04.755001] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.445 [2024-04-24 19:52:04.755244] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.445 [2024-04-24 19:52:04.755268] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.445 [2024-04-24 19:52:04.755284] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.445 [2024-04-24 19:52:04.758875] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.445 [2024-04-24 19:52:04.767818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.445 [2024-04-24 19:52:04.768304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.768663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.768709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.445 [2024-04-24 19:52:04.768726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.445 [2024-04-24 19:52:04.768957] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.445 [2024-04-24 19:52:04.769200] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.445 [2024-04-24 19:52:04.769224] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.445 [2024-04-24 19:52:04.769239] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.445 [2024-04-24 19:52:04.772816] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.445 [2024-04-24 19:52:04.781665] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.445 [2024-04-24 19:52:04.782132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.782375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.782403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.445 [2024-04-24 19:52:04.782421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.445 [2024-04-24 19:52:04.782668] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.445 [2024-04-24 19:52:04.782911] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.445 [2024-04-24 19:52:04.782936] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.445 [2024-04-24 19:52:04.782953] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.445 [2024-04-24 19:52:04.786519] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.445 [2024-04-24 19:52:04.795566] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.445 [2024-04-24 19:52:04.796048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.796278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.445 [2024-04-24 19:52:04.796307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.446 [2024-04-24 19:52:04.796325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.446 [2024-04-24 19:52:04.796561] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.446 [2024-04-24 19:52:04.796814] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.446 [2024-04-24 19:52:04.796839] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.446 [2024-04-24 19:52:04.796854] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.446 [2024-04-24 19:52:04.800412] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.446 [2024-04-24 19:52:04.809465] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.446 [2024-04-24 19:52:04.809888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.446 [2024-04-24 19:52:04.810185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.446 [2024-04-24 19:52:04.810210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.446 [2024-04-24 19:52:04.810241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.446 [2024-04-24 19:52:04.810488] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.446 [2024-04-24 19:52:04.810740] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.446 [2024-04-24 19:52:04.810765] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.446 [2024-04-24 19:52:04.810781] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.446 [2024-04-24 19:52:04.814416] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.446 [2024-04-24 19:52:04.823464] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:23.446 [2024-04-24 19:52:04.823936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.446 [2024-04-24 19:52:04.824230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:23.446 [2024-04-24 19:52:04.824279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:23.446 [2024-04-24 19:52:04.824298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:23.446 [2024-04-24 19:52:04.824536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:23.446 [2024-04-24 19:52:04.824791] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:23.446 [2024-04-24 19:52:04.824816] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:23.446 [2024-04-24 19:52:04.824831] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:23.446 [2024-04-24 19:52:04.828390] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:23.446 [2024-04-24 19:52:04.837451] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.446 [2024-04-24 19:52:04.837936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.446 [2024-04-24 19:52:04.838176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.446 [2024-04-24 19:52:04.838230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.446 [2024-04-24 19:52:04.838250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.446 [2024-04-24 19:52:04.838488] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.446 [2024-04-24 19:52:04.838741] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.446 [2024-04-24 19:52:04.838766] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.446 [2024-04-24 19:52:04.838782] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.446 [2024-04-24 19:52:04.842350] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.446 [2024-04-24 19:52:04.851393] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.446 [2024-04-24 19:52:04.851879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.446 [2024-04-24 19:52:04.852226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.446 [2024-04-24 19:52:04.852277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.446 [2024-04-24 19:52:04.852295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.446 [2024-04-24 19:52:04.852533] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.446 [2024-04-24 19:52:04.852786] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.446 [2024-04-24 19:52:04.852813] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.446 [2024-04-24 19:52:04.852829] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.446 [2024-04-24 19:52:04.856389] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.446 [2024-04-24 19:52:04.865229] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.446 [2024-04-24 19:52:04.865773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.446 [2024-04-24 19:52:04.866007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.446 [2024-04-24 19:52:04.866033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.446 [2024-04-24 19:52:04.866049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.446 [2024-04-24 19:52:04.866300] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.446 [2024-04-24 19:52:04.866544] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.446 [2024-04-24 19:52:04.866569] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.446 [2024-04-24 19:52:04.866584] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.446 [2024-04-24 19:52:04.870157] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.446 [2024-04-24 19:52:04.879229] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.446 [2024-04-24 19:52:04.879696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.446 [2024-04-24 19:52:04.879992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.446 [2024-04-24 19:52:04.880039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.446 [2024-04-24 19:52:04.880064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.446 [2024-04-24 19:52:04.880303] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.446 [2024-04-24 19:52:04.880546] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.446 [2024-04-24 19:52:04.880571] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.446 [2024-04-24 19:52:04.880587] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.446 [2024-04-24 19:52:04.884158] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.446 [2024-04-24 19:52:04.893190] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.446 [2024-04-24 19:52:04.893641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.446 [2024-04-24 19:52:04.893848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.446 [2024-04-24 19:52:04.893877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.446 [2024-04-24 19:52:04.893896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.446 [2024-04-24 19:52:04.894134] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.446 [2024-04-24 19:52:04.894377] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.446 [2024-04-24 19:52:04.894402] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.446 [2024-04-24 19:52:04.894417] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.446 [2024-04-24 19:52:04.897986] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.446 [2024-04-24 19:52:04.907021] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.446 [2024-04-24 19:52:04.907465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.446 [2024-04-24 19:52:04.907671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.446 [2024-04-24 19:52:04.907702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.446 [2024-04-24 19:52:04.907720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.446 [2024-04-24 19:52:04.907958] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.446 [2024-04-24 19:52:04.908199] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.446 [2024-04-24 19:52:04.908223] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.446 [2024-04-24 19:52:04.908239] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.446 [2024-04-24 19:52:04.911813] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.446 [2024-04-24 19:52:04.920856] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.446 [2024-04-24 19:52:04.921331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.446 [2024-04-24 19:52:04.921528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.446 [2024-04-24 19:52:04.921556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.446 [2024-04-24 19:52:04.921574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.446 [2024-04-24 19:52:04.921841] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.446 [2024-04-24 19:52:04.922086] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.446 [2024-04-24 19:52:04.922112] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.447 [2024-04-24 19:52:04.922128] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.447 [2024-04-24 19:52:04.925694] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.447 [2024-04-24 19:52:04.934713] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.447 [2024-04-24 19:52:04.935187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.447 [2024-04-24 19:52:04.935445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.447 [2024-04-24 19:52:04.935492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.447 [2024-04-24 19:52:04.935510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.447 [2024-04-24 19:52:04.935767] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.447 [2024-04-24 19:52:04.936012] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.447 [2024-04-24 19:52:04.936037] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.447 [2024-04-24 19:52:04.936052] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.447 [2024-04-24 19:52:04.939606] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.447 [2024-04-24 19:52:04.948673] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.447 [2024-04-24 19:52:04.949120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.447 [2024-04-24 19:52:04.949401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.447 [2024-04-24 19:52:04.949430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.447 [2024-04-24 19:52:04.949448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.447 [2024-04-24 19:52:04.949706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.447 [2024-04-24 19:52:04.949951] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.447 [2024-04-24 19:52:04.949976] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.447 [2024-04-24 19:52:04.949992] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.447 [2024-04-24 19:52:04.953439] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.706 [2024-04-24 19:52:04.962434] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.706 [2024-04-24 19:52:04.962871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:04.963068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:04.963098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.706 [2024-04-24 19:52:04.963116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.706 [2024-04-24 19:52:04.963355] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.706 [2024-04-24 19:52:04.963605] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.706 [2024-04-24 19:52:04.963641] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.706 [2024-04-24 19:52:04.963660] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.706 [2024-04-24 19:52:04.967105] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.706 [2024-04-24 19:52:04.976359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.706 [2024-04-24 19:52:04.976833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:04.977143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:04.977195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.706 [2024-04-24 19:52:04.977213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.706 [2024-04-24 19:52:04.977452] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.706 [2024-04-24 19:52:04.977721] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.706 [2024-04-24 19:52:04.977748] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.706 [2024-04-24 19:52:04.977764] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.706 [2024-04-24 19:52:04.981316] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.706 [2024-04-24 19:52:04.990362] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.706 [2024-04-24 19:52:04.990837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:04.991167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:04.991223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.706 [2024-04-24 19:52:04.991241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.706 [2024-04-24 19:52:04.991480] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.706 [2024-04-24 19:52:04.991745] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.706 [2024-04-24 19:52:04.991773] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.706 [2024-04-24 19:52:04.991789] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.706 [2024-04-24 19:52:04.995341] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.706 [2024-04-24 19:52:05.004399] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.706 [2024-04-24 19:52:05.004848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.005203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.005253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.706 [2024-04-24 19:52:05.005271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.706 [2024-04-24 19:52:05.005509] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.706 [2024-04-24 19:52:05.005770] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.706 [2024-04-24 19:52:05.005802] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.706 [2024-04-24 19:52:05.005820] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.706 [2024-04-24 19:52:05.009371] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.706 [2024-04-24 19:52:05.018406] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.706 [2024-04-24 19:52:05.018880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.019247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.019305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.706 [2024-04-24 19:52:05.019323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.706 [2024-04-24 19:52:05.019561] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.706 [2024-04-24 19:52:05.019824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.706 [2024-04-24 19:52:05.019852] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.706 [2024-04-24 19:52:05.019868] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.706 [2024-04-24 19:52:05.023421] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.706 [2024-04-24 19:52:05.032258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.706 [2024-04-24 19:52:05.032705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.032945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.032974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.706 [2024-04-24 19:52:05.032993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.706 [2024-04-24 19:52:05.033231] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.706 [2024-04-24 19:52:05.033476] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.706 [2024-04-24 19:52:05.033500] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.706 [2024-04-24 19:52:05.033516] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.706 [2024-04-24 19:52:05.037089] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.706 [2024-04-24 19:52:05.046131] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.706 [2024-04-24 19:52:05.046710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.047115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.047164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.706 [2024-04-24 19:52:05.047182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.706 [2024-04-24 19:52:05.047420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.706 [2024-04-24 19:52:05.047682] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.706 [2024-04-24 19:52:05.047709] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.706 [2024-04-24 19:52:05.047731] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.706 [2024-04-24 19:52:05.051284] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.706 [2024-04-24 19:52:05.060105] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.706 [2024-04-24 19:52:05.060579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.060798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.060829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.706 [2024-04-24 19:52:05.060848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.706 [2024-04-24 19:52:05.061085] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.706 [2024-04-24 19:52:05.061329] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.706 [2024-04-24 19:52:05.061354] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.706 [2024-04-24 19:52:05.061370] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.706 [2024-04-24 19:52:05.064944] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.706 [2024-04-24 19:52:05.074009] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.706 [2024-04-24 19:52:05.074458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.074671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.074703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.706 [2024-04-24 19:52:05.074722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.706 [2024-04-24 19:52:05.074960] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.706 [2024-04-24 19:52:05.075204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.706 [2024-04-24 19:52:05.075229] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.706 [2024-04-24 19:52:05.075245] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.706 [2024-04-24 19:52:05.078826] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.706 [2024-04-24 19:52:05.087870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.706 [2024-04-24 19:52:05.088336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.088534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.088561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.706 [2024-04-24 19:52:05.088579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.706 [2024-04-24 19:52:05.088835] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.706 [2024-04-24 19:52:05.089078] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.706 [2024-04-24 19:52:05.089104] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.706 [2024-04-24 19:52:05.089120] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.706 [2024-04-24 19:52:05.092697] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.706 [2024-04-24 19:52:05.101716] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.706 [2024-04-24 19:52:05.102179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.102535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.102590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.706 [2024-04-24 19:52:05.102609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.706 [2024-04-24 19:52:05.102864] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.706 [2024-04-24 19:52:05.103107] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.706 [2024-04-24 19:52:05.103132] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.706 [2024-04-24 19:52:05.103149] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.706 [2024-04-24 19:52:05.106717] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.706 [2024-04-24 19:52:05.115518] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.706 [2024-04-24 19:52:05.115968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.116229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.116258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.706 [2024-04-24 19:52:05.116276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.706 [2024-04-24 19:52:05.116514] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.706 [2024-04-24 19:52:05.116776] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.706 [2024-04-24 19:52:05.116803] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.706 [2024-04-24 19:52:05.116819] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.706 [2024-04-24 19:52:05.120374] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.706 [2024-04-24 19:52:05.129404] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.706 [2024-04-24 19:52:05.129854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.130236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.706 [2024-04-24 19:52:05.130289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.706 [2024-04-24 19:52:05.130307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.706 [2024-04-24 19:52:05.130545] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.706 [2024-04-24 19:52:05.130810] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.706 [2024-04-24 19:52:05.130838] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.707 [2024-04-24 19:52:05.130855] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.707 [2024-04-24 19:52:05.134404] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.707 [2024-04-24 19:52:05.143244] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.707 [2024-04-24 19:52:05.143693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.707 [2024-04-24 19:52:05.143870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.707 [2024-04-24 19:52:05.143898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.707 [2024-04-24 19:52:05.143915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.707 [2024-04-24 19:52:05.144153] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.707 [2024-04-24 19:52:05.144394] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.707 [2024-04-24 19:52:05.144420] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.707 [2024-04-24 19:52:05.144436] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.707 [2024-04-24 19:52:05.148005] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.707 [2024-04-24 19:52:05.157241] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.707 [2024-04-24 19:52:05.157693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.707 [2024-04-24 19:52:05.157902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.707 [2024-04-24 19:52:05.157929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.707 [2024-04-24 19:52:05.157948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.707 [2024-04-24 19:52:05.158185] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.707 [2024-04-24 19:52:05.158428] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.707 [2024-04-24 19:52:05.158453] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.707 [2024-04-24 19:52:05.158470] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.707 [2024-04-24 19:52:05.162040] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.707 [2024-04-24 19:52:05.171073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.707 [2024-04-24 19:52:05.171537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.707 [2024-04-24 19:52:05.171725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.707 [2024-04-24 19:52:05.171756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.707 [2024-04-24 19:52:05.171774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.707 [2024-04-24 19:52:05.172013] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.707 [2024-04-24 19:52:05.172255] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.707 [2024-04-24 19:52:05.172281] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.707 [2024-04-24 19:52:05.172297] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.707 [2024-04-24 19:52:05.175869] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.707 [2024-04-24 19:52:05.184904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.707 [2024-04-24 19:52:05.185350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.707 [2024-04-24 19:52:05.185584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.707 [2024-04-24 19:52:05.185614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.707 [2024-04-24 19:52:05.185645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.707 [2024-04-24 19:52:05.185894] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.707 [2024-04-24 19:52:05.186137] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.707 [2024-04-24 19:52:05.186163] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.707 [2024-04-24 19:52:05.186179] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.707 [2024-04-24 19:52:05.189746] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.707 [2024-04-24 19:52:05.198797] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.707 [2024-04-24 19:52:05.199271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.707 [2024-04-24 19:52:05.199594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.707 [2024-04-24 19:52:05.199657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.707 [2024-04-24 19:52:05.199681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.707 [2024-04-24 19:52:05.199919] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.707 [2024-04-24 19:52:05.200162] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.707 [2024-04-24 19:52:05.200188] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.707 [2024-04-24 19:52:05.200204] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.707 [2024-04-24 19:52:05.203777] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.707 [2024-04-24 19:52:05.212646] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.707 [2024-04-24 19:52:05.213117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.707 [2024-04-24 19:52:05.213372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.707 [2024-04-24 19:52:05.213436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.707 [2024-04-24 19:52:05.213454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.707 [2024-04-24 19:52:05.213706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.707 [2024-04-24 19:52:05.213948] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.707 [2024-04-24 19:52:05.213974] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.707 [2024-04-24 19:52:05.213990] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.707 [2024-04-24 19:52:05.217553] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.966 [2024-04-24 19:52:05.226589] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.966 [2024-04-24 19:52:05.227064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.966 [2024-04-24 19:52:05.227312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.966 [2024-04-24 19:52:05.227343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.966 [2024-04-24 19:52:05.227361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.966 [2024-04-24 19:52:05.227599] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.966 [2024-04-24 19:52:05.227861] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.966 [2024-04-24 19:52:05.227889] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.966 [2024-04-24 19:52:05.227905] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.966 [2024-04-24 19:52:05.231460] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.966 [2024-04-24 19:52:05.240499] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.966 [2024-04-24 19:52:05.240949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.966 [2024-04-24 19:52:05.241247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.966 [2024-04-24 19:52:05.241294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.966 [2024-04-24 19:52:05.241313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.966 [2024-04-24 19:52:05.241551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.966 [2024-04-24 19:52:05.241815] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.966 [2024-04-24 19:52:05.241843] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.966 [2024-04-24 19:52:05.241859] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.966 [2024-04-24 19:52:05.245410] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.966 [2024-04-24 19:52:05.254467] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.966 [2024-04-24 19:52:05.254943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.966 [2024-04-24 19:52:05.255156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.966 [2024-04-24 19:52:05.255202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.966 [2024-04-24 19:52:05.255220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.967 [2024-04-24 19:52:05.255458] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.967 [2024-04-24 19:52:05.255721] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.967 [2024-04-24 19:52:05.255749] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.967 [2024-04-24 19:52:05.255765] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.967 [2024-04-24 19:52:05.259316] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.967 [2024-04-24 19:52:05.268385] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.967 [2024-04-24 19:52:05.268837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.967 [2024-04-24 19:52:05.269097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.967 [2024-04-24 19:52:05.269144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.967 [2024-04-24 19:52:05.269168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.967 [2024-04-24 19:52:05.269407] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.967 [2024-04-24 19:52:05.269669] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.967 [2024-04-24 19:52:05.269696] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.967 [2024-04-24 19:52:05.269713] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.967 [2024-04-24 19:52:05.273264] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.967 [2024-04-24 19:52:05.282297] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.967 [2024-04-24 19:52:05.282773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.967 [2024-04-24 19:52:05.283048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.967 [2024-04-24 19:52:05.283078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.967 [2024-04-24 19:52:05.283096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.967 [2024-04-24 19:52:05.283334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.967 [2024-04-24 19:52:05.283577] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.967 [2024-04-24 19:52:05.283602] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.967 [2024-04-24 19:52:05.283617] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.967 [2024-04-24 19:52:05.287186] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.967 [2024-04-24 19:52:05.296225] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.967 [2024-04-24 19:52:05.296768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.967 [2024-04-24 19:52:05.296952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.967 [2024-04-24 19:52:05.296981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.967 [2024-04-24 19:52:05.296998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.967 [2024-04-24 19:52:05.297236] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.967 [2024-04-24 19:52:05.297479] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.967 [2024-04-24 19:52:05.297505] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.967 [2024-04-24 19:52:05.297520] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.967 [2024-04-24 19:52:05.301093] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.967 [2024-04-24 19:52:05.310118] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.967 [2024-04-24 19:52:05.310594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.967 [2024-04-24 19:52:05.310781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.967 [2024-04-24 19:52:05.310812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.967 [2024-04-24 19:52:05.310830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.967 [2024-04-24 19:52:05.311074] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.967 [2024-04-24 19:52:05.311318] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.967 [2024-04-24 19:52:05.311343] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.967 [2024-04-24 19:52:05.311358] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.967 [2024-04-24 19:52:05.314929] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.967 [2024-04-24 19:52:05.323981] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.967 [2024-04-24 19:52:05.324446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.967 [2024-04-24 19:52:05.324662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.967 [2024-04-24 19:52:05.324692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.967 [2024-04-24 19:52:05.324709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.967 [2024-04-24 19:52:05.324947] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.967 [2024-04-24 19:52:05.325190] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.967 [2024-04-24 19:52:05.325216] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.967 [2024-04-24 19:52:05.325231] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.967 [2024-04-24 19:52:05.328805] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.967 [2024-04-24 19:52:05.337842] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.967 [2024-04-24 19:52:05.338282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.967 [2024-04-24 19:52:05.338697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.967 [2024-04-24 19:52:05.338729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.967 [2024-04-24 19:52:05.338747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.967 [2024-04-24 19:52:05.338986] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.967 [2024-04-24 19:52:05.339231] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.967 [2024-04-24 19:52:05.339256] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.967 [2024-04-24 19:52:05.339272] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.967 [2024-04-24 19:52:05.342845] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:23.967 [2024-04-24 19:52:05.351668] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:23.967 [2024-04-24 19:52:05.352143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.967 [2024-04-24 19:52:05.352382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:23.967 [2024-04-24 19:52:05.352428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:23.967 [2024-04-24 19:52:05.352447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:23.967 [2024-04-24 19:52:05.352705] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:23.967 [2024-04-24 19:52:05.352957] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:23.967 [2024-04-24 19:52:05.352982] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:23.967 [2024-04-24 19:52:05.352999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:23.967 [2024-04-24 19:52:05.356552] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:23.968 - 00:21:24.493 [2024-04-24 19:52:05.365585 - 19:52:05.976120] ... the identical ten-record reset sequence above (resetting controller -> connect() failed, errno = 111 -> sock connection error -> Ctrlr is in error state -> controller reinitialization failed -> Resetting controller failed.) repeats for 45 further reconnect attempts against tqpair=0x1be1170, addr=10.0.0.2, port=4420, roughly every 14 ms; only the timestamps advance ...
00:21:24.493 [2024-04-24 19:52:05.989438] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:24.493 [2024-04-24 19:52:05.989832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:24.493 [2024-04-24 19:52:05.990026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:24.493 [2024-04-24 19:52:05.990051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:24.493 [2024-04-24 19:52:05.990066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:24.493 [2024-04-24 19:52:05.990299] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:24.493 [2024-04-24 19:52:05.990509] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:24.493 [2024-04-24 19:52:05.990529] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:24.493 [2024-04-24 19:52:05.990542] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:24.493 [2024-04-24 19:52:05.993490] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:24.493 [2024-04-24 19:52:06.003061] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.493 [2024-04-24 19:52:06.003484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.493 [2024-04-24 19:52:06.003686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.493 [2024-04-24 19:52:06.003713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.493 [2024-04-24 19:52:06.003728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.752 [2024-04-24 19:52:06.003956] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.752 [2024-04-24 19:52:06.004189] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.752 [2024-04-24 19:52:06.004225] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.752 [2024-04-24 19:52:06.004240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.752 [2024-04-24 19:52:06.007197] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.752 [2024-04-24 19:52:06.016365] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.752 [2024-04-24 19:52:06.016842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.752 [2024-04-24 19:52:06.017035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.752 [2024-04-24 19:52:06.017060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.752 [2024-04-24 19:52:06.017076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.752 [2024-04-24 19:52:06.017338] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.752 [2024-04-24 19:52:06.017532] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.752 [2024-04-24 19:52:06.017553] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.752 [2024-04-24 19:52:06.017566] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.752 [2024-04-24 19:52:06.020509] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
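Each pass through the log follows the same sequence: disconnect ("resetting controller"), socket connect failure, controller left in error state, reconnect poll reporting "controller reinitialization failed", and finally _bdev_nvme_reset_ctrlr_complete logging "Resetting controller failed." before the next attempt begins roughly 13 ms later. A schematic of that retry loop, with all names as illustrative stand-ins rather than SPDK's actual API:

```c
/* Schematic of the per-attempt reset flow seen in the log.
 * connect_qpair() is a stand-in for nvme_tcp_qpair_connect_sock();
 * with the target process killed it always fails. Not SPDK code. */
#include <stdbool.h>
#include <stdio.h>

static bool connect_qpair(void)
{
    return false;   /* simulates connect() -> ECONNREFUSED */
}

int main(void)
{
    for (int attempt = 1; attempt <= 3; attempt++) {
        printf("attempt %d: resetting controller\n", attempt);
        if (connect_qpair()) {
            printf("controller reconnected\n");
            return 0;
        }
        /* Mirrors the log's tail of each pass: process_init ->
         * "Ctrlr is in error state", reconnect_poll_async ->
         * "reinitialization failed", then the bdev layer reports: */
        printf("attempt %d: Resetting controller failed.\n", attempt);
    }
    return 1;
}
```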
00:21:24.752 [2024-04-24 19:52:06.029559] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.752 [2024-04-24 19:52:06.030038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.752 [2024-04-24 19:52:06.030243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.752 [2024-04-24 19:52:06.030269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.752 [2024-04-24 19:52:06.030285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.752 [2024-04-24 19:52:06.030515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.752 [2024-04-24 19:52:06.030741] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.752 [2024-04-24 19:52:06.030764] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.752 [2024-04-24 19:52:06.030777] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.752 [2024-04-24 19:52:06.033713] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.752 [2024-04-24 19:52:06.042892] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.752 [2024-04-24 19:52:06.043388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.752 [2024-04-24 19:52:06.043579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.752 [2024-04-24 19:52:06.043606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.752 [2024-04-24 19:52:06.043637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.752 [2024-04-24 19:52:06.043882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.752 [2024-04-24 19:52:06.044091] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.752 [2024-04-24 19:52:06.044113] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.752 [2024-04-24 19:52:06.044126] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.752 [2024-04-24 19:52:06.047061] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:24.752 [2024-04-24 19:52:06.056073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.752 [2024-04-24 19:52:06.056489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.752 [2024-04-24 19:52:06.056686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.752 [2024-04-24 19:52:06.056715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.752 [2024-04-24 19:52:06.056732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.752 [2024-04-24 19:52:06.056986] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.752 [2024-04-24 19:52:06.057179] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.752 [2024-04-24 19:52:06.057200] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.752 [2024-04-24 19:52:06.057214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.752 [2024-04-24 19:52:06.060155] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.752 [2024-04-24 19:52:06.069293] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.752 [2024-04-24 19:52:06.069775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.752 [2024-04-24 19:52:06.069966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.752 [2024-04-24 19:52:06.069991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.752 [2024-04-24 19:52:06.070007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.752 [2024-04-24 19:52:06.070270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.752 [2024-04-24 19:52:06.070464] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.752 [2024-04-24 19:52:06.070484] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.752 [2024-04-24 19:52:06.070497] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.752 [2024-04-24 19:52:06.073443] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:24.752 [2024-04-24 19:52:06.082464] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.752 [2024-04-24 19:52:06.082916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.752 [2024-04-24 19:52:06.083108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.752 [2024-04-24 19:52:06.083133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.752 [2024-04-24 19:52:06.083148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.752 [2024-04-24 19:52:06.083413] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.752 [2024-04-24 19:52:06.083607] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.752 [2024-04-24 19:52:06.083656] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.752 [2024-04-24 19:52:06.083672] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.752 [2024-04-24 19:52:06.086589] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.752 [2024-04-24 19:52:06.095786] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.752 [2024-04-24 19:52:06.096286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.096474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.096499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.753 [2024-04-24 19:52:06.096515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.753 [2024-04-24 19:52:06.096797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.753 [2024-04-24 19:52:06.097010] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.753 [2024-04-24 19:52:06.097030] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.753 [2024-04-24 19:52:06.097043] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.753 [2024-04-24 19:52:06.099981] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:24.753 [2024-04-24 19:52:06.108993] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.753 [2024-04-24 19:52:06.109437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.109656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.109683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.753 [2024-04-24 19:52:06.109700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.753 [2024-04-24 19:52:06.109947] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.753 [2024-04-24 19:52:06.110141] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.753 [2024-04-24 19:52:06.110162] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.753 [2024-04-24 19:52:06.110174] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.753 [2024-04-24 19:52:06.113115] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.753 [2024-04-24 19:52:06.122219] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.753 [2024-04-24 19:52:06.122717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.122880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.122905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.753 [2024-04-24 19:52:06.122921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.753 [2024-04-24 19:52:06.123185] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.753 [2024-04-24 19:52:06.123379] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.753 [2024-04-24 19:52:06.123399] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.753 [2024-04-24 19:52:06.123412] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.753 [2024-04-24 19:52:06.126392] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
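The "Failed to flush tqpair=0x1be1170 (9): Bad file descriptor" line in each pass is errno 9 (EBADF): by the time the completion path tries to flush the qpair, the socket descriptor behind it has apparently already been torn down by the failed connect. A two-line illustration of that errno (the causal reading of the log is an inference, not confirmed from SPDK source):

```c
/* Why the flush reports (9) Bad file descriptor: writing to a
 * descriptor that was already closed yields errno 9 (EBADF). */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = dup(1);    /* any valid descriptor */
    close(fd);          /* connect failed -> socket torn down */
    if (write(fd, "x", 1) < 0) {
        /* Prints: flush failed (9): Bad file descriptor */
        printf("flush failed (%d): %s\n", errno, strerror(errno));
    }
    return 0;
}
```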
00:21:24.753 [2024-04-24 19:52:06.135409] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.753 [2024-04-24 19:52:06.135854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.136073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.136100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.753 [2024-04-24 19:52:06.136117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.753 [2024-04-24 19:52:06.136384] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.753 [2024-04-24 19:52:06.136579] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.753 [2024-04-24 19:52:06.136600] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.753 [2024-04-24 19:52:06.136636] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.753 [2024-04-24 19:52:06.139567] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.753 [2024-04-24 19:52:06.148641] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.753 [2024-04-24 19:52:06.149065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.149252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.149277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.753 [2024-04-24 19:52:06.149294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.753 [2024-04-24 19:52:06.149545] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.753 [2024-04-24 19:52:06.149789] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.753 [2024-04-24 19:52:06.149812] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.753 [2024-04-24 19:52:06.149824] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.753 [2024-04-24 19:52:06.152758] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:24.753 [2024-04-24 19:52:06.161750] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.753 [2024-04-24 19:52:06.162155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.162345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.162370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.753 [2024-04-24 19:52:06.162386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.753 [2024-04-24 19:52:06.162649] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.753 [2024-04-24 19:52:06.162868] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.753 [2024-04-24 19:52:06.162891] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.753 [2024-04-24 19:52:06.162904] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.753 [2024-04-24 19:52:06.165838] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.753 [2024-04-24 19:52:06.175028] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.753 [2024-04-24 19:52:06.175470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.175674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.175715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.753 [2024-04-24 19:52:06.175730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.753 [2024-04-24 19:52:06.175963] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.753 [2024-04-24 19:52:06.176173] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.753 [2024-04-24 19:52:06.176194] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.753 [2024-04-24 19:52:06.176207] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.753 [2024-04-24 19:52:06.179149] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:24.753 [2024-04-24 19:52:06.188337] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.753 [2024-04-24 19:52:06.188730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.188952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.188979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.753 [2024-04-24 19:52:06.188995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.753 [2024-04-24 19:52:06.189264] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.753 [2024-04-24 19:52:06.189458] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.753 [2024-04-24 19:52:06.189479] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.753 [2024-04-24 19:52:06.189492] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.753 [2024-04-24 19:52:06.192438] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.753 [2024-04-24 19:52:06.201638] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.753 [2024-04-24 19:52:06.202044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.202260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.202287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.753 [2024-04-24 19:52:06.202304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.753 [2024-04-24 19:52:06.202569] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.753 [2024-04-24 19:52:06.202805] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.753 [2024-04-24 19:52:06.202829] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.753 [2024-04-24 19:52:06.202851] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.753 [2024-04-24 19:52:06.205979] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:24.753 [2024-04-24 19:52:06.215156] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.753 [2024-04-24 19:52:06.215639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.215807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.753 [2024-04-24 19:52:06.215834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.753 [2024-04-24 19:52:06.215850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.754 [2024-04-24 19:52:06.216087] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.754 [2024-04-24 19:52:06.216280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.754 [2024-04-24 19:52:06.216301] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.754 [2024-04-24 19:52:06.216314] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.754 [2024-04-24 19:52:06.219435] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.754 [2024-04-24 19:52:06.228353] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.754 [2024-04-24 19:52:06.228776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.754 [2024-04-24 19:52:06.228964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.754 [2024-04-24 19:52:06.228991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.754 [2024-04-24 19:52:06.229007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.754 [2024-04-24 19:52:06.229259] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.754 [2024-04-24 19:52:06.229454] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.754 [2024-04-24 19:52:06.229474] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.754 [2024-04-24 19:52:06.229487] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.754 [2024-04-24 19:52:06.232433] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:24.754 [2024-04-24 19:52:06.241613] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.754 [2024-04-24 19:52:06.242056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.754 [2024-04-24 19:52:06.242255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.754 [2024-04-24 19:52:06.242281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.754 [2024-04-24 19:52:06.242298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.754 [2024-04-24 19:52:06.242552] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.754 [2024-04-24 19:52:06.242778] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.754 [2024-04-24 19:52:06.242801] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.754 [2024-04-24 19:52:06.242819] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.754 [2024-04-24 19:52:06.245765] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.754 [2024-04-24 19:52:06.254961] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.754 [2024-04-24 19:52:06.255344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.754 [2024-04-24 19:52:06.255566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.754 [2024-04-24 19:52:06.255607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:24.754 [2024-04-24 19:52:06.255623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:24.754 [2024-04-24 19:52:06.255881] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:24.754 [2024-04-24 19:52:06.256094] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.754 [2024-04-24 19:52:06.256115] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.754 [2024-04-24 19:52:06.256128] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.754 [2024-04-24 19:52:06.259067] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.013 [2024-04-24 19:52:06.268293] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.013 [2024-04-24 19:52:06.268779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.013 [2024-04-24 19:52:06.268971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.013 [2024-04-24 19:52:06.268996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.013 [2024-04-24 19:52:06.269012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.013 [2024-04-24 19:52:06.269263] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.013 [2024-04-24 19:52:06.269472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.013 [2024-04-24 19:52:06.269493] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.013 [2024-04-24 19:52:06.269505] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.013 [2024-04-24 19:52:06.272613] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.013 [2024-04-24 19:52:06.281480] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.013 [2024-04-24 19:52:06.281878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.013 [2024-04-24 19:52:06.282168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.013 [2024-04-24 19:52:06.282197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.013 [2024-04-24 19:52:06.282228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.013 [2024-04-24 19:52:06.282476] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.013 [2024-04-24 19:52:06.282702] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.013 [2024-04-24 19:52:06.282725] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.013 [2024-04-24 19:52:06.282739] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.013 [2024-04-24 19:52:06.285685] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
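The recurring notice "The recv state of tqpair=0x1be1170 is same with the state(5) to be set" comes from a guard on the qpair's PDU receive state machine: the code flags an attempt to move the state machine into the state it already occupies. A sketch of such a guard; the enum names and ordering below are assumptions for illustration (state 5 is plausibly the error state, but SPDK's exact definitions may differ):

```c
/* Sketch of the guard behind "recv state ... is same with the
 * state(N) to be set". Enum layout is an assumption, not SPDK's. */
#include <stdio.h>

enum recv_state {
    RECV_STATE_AWAIT_PDU_READY,
    RECV_STATE_AWAIT_PDU_CH,
    RECV_STATE_AWAIT_PDU_PSH,
    RECV_STATE_AWAIT_PDU_PAYLOAD,
    RECV_STATE_QUIESCING,
    RECV_STATE_ERROR,            /* value 5 in this illustrative layout */
};

struct tqpair { enum recv_state recv_state; };

static void set_recv_state(struct tqpair *q, enum recv_state s)
{
    if (q->recv_state == s) {
        fprintf(stderr,
            "The recv state of tqpair=%p is same with the state(%d) to be set\n",
            (void *)q, (int)s);
        return;
    }
    q->recv_state = s;
}

int main(void)
{
    struct tqpair q = { .recv_state = RECV_STATE_ERROR };
    set_recv_state(&q, RECV_STATE_ERROR);   /* triggers the message */
    return 0;
}
```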
00:21:25.013 [2024-04-24 19:52:06.294706] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.013 [2024-04-24 19:52:06.295217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.013 [2024-04-24 19:52:06.295379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.013 [2024-04-24 19:52:06.295404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.013 [2024-04-24 19:52:06.295420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.013 [2024-04-24 19:52:06.295684] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.013 [2024-04-24 19:52:06.295885] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.013 [2024-04-24 19:52:06.295906] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.013 [2024-04-24 19:52:06.295921] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.013 [2024-04-24 19:52:06.298861] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.013 [2024-04-24 19:52:06.307871] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.013 [2024-04-24 19:52:06.308306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.013 [2024-04-24 19:52:06.308509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.013 [2024-04-24 19:52:06.308547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.013 [2024-04-24 19:52:06.308563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.013 [2024-04-24 19:52:06.308828] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.013 [2024-04-24 19:52:06.309043] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.013 [2024-04-24 19:52:06.309065] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.013 [2024-04-24 19:52:06.309077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.013 [2024-04-24 19:52:06.312012] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.013 [2024-04-24 19:52:06.321180] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.013 [2024-04-24 19:52:06.321600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.013 [2024-04-24 19:52:06.321813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.013 [2024-04-24 19:52:06.321838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.013 [2024-04-24 19:52:06.321854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.013 [2024-04-24 19:52:06.322101] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.013 [2024-04-24 19:52:06.322295] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.013 [2024-04-24 19:52:06.322315] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.013 [2024-04-24 19:52:06.322328] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.013 [2024-04-24 19:52:06.325271] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.013 [2024-04-24 19:52:06.334486] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.013 [2024-04-24 19:52:06.334881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.335099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.335123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.014 [2024-04-24 19:52:06.335139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.014 [2024-04-24 19:52:06.335373] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.014 [2024-04-24 19:52:06.335582] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.014 [2024-04-24 19:52:06.335603] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.014 [2024-04-24 19:52:06.335640] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.014 [2024-04-24 19:52:06.338565] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.014 [2024-04-24 19:52:06.347773] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.014 [2024-04-24 19:52:06.348265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.348489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.348516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.014 [2024-04-24 19:52:06.348532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.014 [2024-04-24 19:52:06.348798] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.014 [2024-04-24 19:52:06.349013] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.014 [2024-04-24 19:52:06.349034] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.014 [2024-04-24 19:52:06.349046] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.014 [2024-04-24 19:52:06.351988] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.014 [2024-04-24 19:52:06.360955] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.014 [2024-04-24 19:52:06.361374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.361575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.361600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.014 [2024-04-24 19:52:06.361616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.014 [2024-04-24 19:52:06.361869] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.014 [2024-04-24 19:52:06.362081] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.014 [2024-04-24 19:52:06.362102] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.014 [2024-04-24 19:52:06.362115] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.014 [2024-04-24 19:52:06.365074] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.014 [2024-04-24 19:52:06.374243] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.014 [2024-04-24 19:52:06.374724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.374918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.374943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.014 [2024-04-24 19:52:06.374959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.014 [2024-04-24 19:52:06.375223] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.014 [2024-04-24 19:52:06.375417] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.014 [2024-04-24 19:52:06.375437] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.014 [2024-04-24 19:52:06.375450] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.014 [2024-04-24 19:52:06.378398] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.014 [2024-04-24 19:52:06.387416] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.014 [2024-04-24 19:52:06.387889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.388080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.388105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.014 [2024-04-24 19:52:06.388121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.014 [2024-04-24 19:52:06.388377] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.014 [2024-04-24 19:52:06.388569] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.014 [2024-04-24 19:52:06.388590] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.014 [2024-04-24 19:52:06.388602] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.014 [2024-04-24 19:52:06.391546] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.014 [2024-04-24 19:52:06.400743] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.014 [2024-04-24 19:52:06.401246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.401458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.401484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.014 [2024-04-24 19:52:06.401501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.014 [2024-04-24 19:52:06.401769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.014 [2024-04-24 19:52:06.401984] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.014 [2024-04-24 19:52:06.402005] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.014 [2024-04-24 19:52:06.402018] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.014 [2024-04-24 19:52:06.404957] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.014 [2024-04-24 19:52:06.413962] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.014 [2024-04-24 19:52:06.414379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.414564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.414591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.014 [2024-04-24 19:52:06.414612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.014 [2024-04-24 19:52:06.414862] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.014 [2024-04-24 19:52:06.415074] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.014 [2024-04-24 19:52:06.415095] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.014 [2024-04-24 19:52:06.415107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.014 [2024-04-24 19:52:06.418046] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.014 [2024-04-24 19:52:06.427258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.014 [2024-04-24 19:52:06.427714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.427931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.427956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.014 [2024-04-24 19:52:06.427972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.014 [2024-04-24 19:52:06.428217] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.014 [2024-04-24 19:52:06.428411] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.014 [2024-04-24 19:52:06.428432] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.014 [2024-04-24 19:52:06.428444] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.014 [2024-04-24 19:52:06.431404] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1775410 Killed "${NVMF_APP[@]}" "$@" 00:21:25.014 19:52:06 -- host/bdevperf.sh@36 -- # tgt_init 00:21:25.014 19:52:06 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:21:25.014 19:52:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:25.014 19:52:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:25.014 19:52:06 -- common/autotest_common.sh@10 -- # set +x 00:21:25.014 19:52:06 -- nvmf/common.sh@470 -- # nvmfpid=1776367 00:21:25.014 19:52:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:25.014 19:52:06 -- nvmf/common.sh@471 -- # waitforlisten 1776367 00:21:25.014 19:52:06 -- common/autotest_common.sh@817 -- # '[' -z 1776367 ']' 00:21:25.014 19:52:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.014 19:52:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:25.014 19:52:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
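At this point bdevperf.sh has killed the previous NVMF app (the bash "Killed" message from line 35 of the script) and tgt_init relaunches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, then waitforlisten blocks until the new process is serving its RPC socket at /var/tmp/spdk.sock. The real waitforlisten is a shell helper in autotest_common.sh; the C sketch below only mirrors the idea of polling a UNIX-domain socket until it accepts connections (path from the log; the retry budget is arbitrary):

```c
/* Poll until something accepts connections on the app's RPC socket.
 * Illustrative analogue of waitforlisten, not the actual helper. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int rpc_socket_ready(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) return 0;

    struct sockaddr_un sa;
    memset(&sa, 0, sizeof(sa));
    sa.sun_family = AF_UNIX;
    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

    int ok = connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    for (int i = 0; i < 100; i++) {          /* 100 x 100 ms budget */
        if (rpc_socket_ready("/var/tmp/spdk.sock")) {
            puts("target is up");
            return 0;
        }
        usleep(100 * 1000);
    }
    fputs("timed out waiting for /var/tmp/spdk.sock\n", stderr);
    return 1;
}
```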
00:21:25.014 19:52:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:25.014 19:52:06 -- common/autotest_common.sh@10 -- # set +x 00:21:25.014 [2024-04-24 19:52:06.440807] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.014 [2024-04-24 19:52:06.441296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.441463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.014 [2024-04-24 19:52:06.441490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.014 [2024-04-24 19:52:06.441507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.015 [2024-04-24 19:52:06.441735] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.015 [2024-04-24 19:52:06.441969] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.015 [2024-04-24 19:52:06.442005] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.015 [2024-04-24 19:52:06.442019] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.015 [2024-04-24 19:52:06.445116] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.015 [2024-04-24 19:52:06.454159] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.015 [2024-04-24 19:52:06.454573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.015 [2024-04-24 19:52:06.454762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.015 [2024-04-24 19:52:06.454789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.015 [2024-04-24 19:52:06.454805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.015 [2024-04-24 19:52:06.455045] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.015 [2024-04-24 19:52:06.455272] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.015 [2024-04-24 19:52:06.455293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.015 [2024-04-24 19:52:06.455306] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.015 [2024-04-24 19:52:06.458695] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.015 [2024-04-24 19:52:06.467506] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.015 [2024-04-24 19:52:06.467895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.015 [2024-04-24 19:52:06.468135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.015 [2024-04-24 19:52:06.468160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.015 [2024-04-24 19:52:06.468177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.015 [2024-04-24 19:52:06.468444] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.015 [2024-04-24 19:52:06.468675] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.015 [2024-04-24 19:52:06.468700] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.015 [2024-04-24 19:52:06.468716] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.015 [2024-04-24 19:52:06.471777] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.015 [2024-04-24 19:52:06.480839] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.015 [2024-04-24 19:52:06.481369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.015 [2024-04-24 19:52:06.481602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.015 [2024-04-24 19:52:06.481641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.015 [2024-04-24 19:52:06.481659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.015 [2024-04-24 19:52:06.481872] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.015 [2024-04-24 19:52:06.482110] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.015 [2024-04-24 19:52:06.482129] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.015 [2024-04-24 19:52:06.482142] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.015 [2024-04-24 19:52:06.483794] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization...
00:21:25.015 [2024-04-24 19:52:06.483882] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:25.015 [2024-04-24 19:52:06.485188] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.015 [2024-04-24 19:52:06.494575] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.015 [2024-04-24 19:52:06.495077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.015 [2024-04-24 19:52:06.495316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.015 [2024-04-24 19:52:06.495345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.015 [2024-04-24 19:52:06.495363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.015 [2024-04-24 19:52:06.495600] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.015 [2024-04-24 19:52:06.495852] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.015 [2024-04-24 19:52:06.495876] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.015 [2024-04-24 19:52:06.495892] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.015 [2024-04-24 19:52:06.499455] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.015 [2024-04-24 19:52:06.508501] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.015 [2024-04-24 19:52:06.508951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.015 [2024-04-24 19:52:06.509258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.015 [2024-04-24 19:52:06.509297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.015 [2024-04-24 19:52:06.509313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.015 [2024-04-24 19:52:06.509533] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.015 [2024-04-24 19:52:06.509787] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.015 [2024-04-24 19:52:06.509812] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.015 [2024-04-24 19:52:06.509828] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.015 [2024-04-24 19:52:06.513386] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.015 EAL: No free 2048 kB hugepages reported on node 1
00:21:25.015 [2024-04-24 19:52:06.522439] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.015 [2024-04-24 19:52:06.522922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.015 [2024-04-24 19:52:06.523154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.015 [2024-04-24 19:52:06.523183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.015 [2024-04-24 19:52:06.523207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.015 [2024-04-24 19:52:06.523446] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.015 [2024-04-24 19:52:06.523710] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.015 [2024-04-24 19:52:06.523734] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.015 [2024-04-24 19:52:06.523750] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.275 [2024-04-24 19:52:06.527324] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.275 [2024-04-24 19:52:06.536394] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.275 [2024-04-24 19:52:06.536866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.275 [2024-04-24 19:52:06.537089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.275 [2024-04-24 19:52:06.537119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.275 [2024-04-24 19:52:06.537137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.275 [2024-04-24 19:52:06.537374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.275 [2024-04-24 19:52:06.537617] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.275 [2024-04-24 19:52:06.537650] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.275 [2024-04-24 19:52:06.537668] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.275 [2024-04-24 19:52:06.541233] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
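The EAL notice in this block ("No free 2048 kB hugepages reported on node 1") only means node 1's 2 MB pool is empty; initialization proceeds on the other node. The per-node pools can be checked directly, assuming the standard kernel sysfs layout:

  # Show free vs. total 2 MB hugepages on each NUMA node.
  for node in /sys/devices/system/node/node*; do
      hp=$node/hugepages/hugepages-2048kB
      echo "$node: $(cat "$hp"/free_hugepages)/$(cat "$hp"/nr_hugepages) free"
  done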
00:21:25.275 [2024-04-24 19:52:06.550283] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.275 [2024-04-24 19:52:06.550743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.275 [2024-04-24 19:52:06.550980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.275 [2024-04-24 19:52:06.551008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.275 [2024-04-24 19:52:06.551026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.275 [2024-04-24 19:52:06.551264] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.275 [2024-04-24 19:52:06.551506] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.275 [2024-04-24 19:52:06.551530] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.275 [2024-04-24 19:52:06.551546] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.275 [2024-04-24 19:52:06.555317] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.275 [2024-04-24 19:52:06.556817] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:25.275 [2024-04-24 19:52:06.564223] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.275 [2024-04-24 19:52:06.564833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.275 [2024-04-24 19:52:06.565074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.275 [2024-04-24 19:52:06.565105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.275 [2024-04-24 19:52:06.565135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.275 [2024-04-24 19:52:06.565382] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.275 [2024-04-24 19:52:06.565643] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.275 [2024-04-24 19:52:06.565669] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.275 [2024-04-24 19:52:06.565687] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.275 [2024-04-24 19:52:06.569268] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.275 [2024-04-24 19:52:06.578138] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.276 [2024-04-24 19:52:06.578716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.578976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.579006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.276 [2024-04-24 19:52:06.579026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.276 [2024-04-24 19:52:06.579269] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.276 [2024-04-24 19:52:06.579513] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.276 [2024-04-24 19:52:06.579537] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.276 [2024-04-24 19:52:06.579554] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.276 [2024-04-24 19:52:06.583132] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.276 [2024-04-24 19:52:06.591994] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.276 [2024-04-24 19:52:06.592482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.592730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.592761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.276 [2024-04-24 19:52:06.592780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.276 [2024-04-24 19:52:06.593018] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.276 [2024-04-24 19:52:06.593260] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.276 [2024-04-24 19:52:06.593285] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.276 [2024-04-24 19:52:06.593301] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.276 [2024-04-24 19:52:06.596876] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.276 [2024-04-24 19:52:06.605925] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.276 [2024-04-24 19:52:06.606419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.606664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.606696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.276 [2024-04-24 19:52:06.606715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.276 [2024-04-24 19:52:06.606964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.276 [2024-04-24 19:52:06.607207] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.276 [2024-04-24 19:52:06.607231] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.276 [2024-04-24 19:52:06.607247] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.276 [2024-04-24 19:52:06.610817] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.276 [2024-04-24 19:52:06.619860] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.276 [2024-04-24 19:52:06.620366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.620639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.620673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.276 [2024-04-24 19:52:06.620694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.276 [2024-04-24 19:52:06.620933] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.276 [2024-04-24 19:52:06.621177] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.276 [2024-04-24 19:52:06.621201] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.276 [2024-04-24 19:52:06.621219] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.276 [2024-04-24 19:52:06.624837] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.276 [2024-04-24 19:52:06.633919] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.276 [2024-04-24 19:52:06.634491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.634772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.634802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.276 [2024-04-24 19:52:06.634824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.276 [2024-04-24 19:52:06.635070] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.276 [2024-04-24 19:52:06.635318] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.276 [2024-04-24 19:52:06.635343] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.276 [2024-04-24 19:52:06.635361] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.276 [2024-04-24 19:52:06.638940] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.276 [2024-04-24 19:52:06.647800] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.276 [2024-04-24 19:52:06.648245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.648468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.648498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.276 [2024-04-24 19:52:06.648517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.276 [2024-04-24 19:52:06.648774] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.276 [2024-04-24 19:52:06.649030] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.276 [2024-04-24 19:52:06.649054] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.276 [2024-04-24 19:52:06.649070] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.276 [2024-04-24 19:52:06.652621] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.276 [2024-04-24 19:52:06.661682] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.276 [2024-04-24 19:52:06.662154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.662368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.662398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.276 [2024-04-24 19:52:06.662417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.276 [2024-04-24 19:52:06.662675] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.276 [2024-04-24 19:52:06.662919] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.276 [2024-04-24 19:52:06.662944] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.276 [2024-04-24 19:52:06.662961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.276 [2024-04-24 19:52:06.666512] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.276 [2024-04-24 19:52:06.675565] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.276 [2024-04-24 19:52:06.676065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.676315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.676343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.276 [2024-04-24 19:52:06.676362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.276 [2024-04-24 19:52:06.676600] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.276 [2024-04-24 19:52:06.676863] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.276 [2024-04-24 19:52:06.676888] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.276 [2024-04-24 19:52:06.676904] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.276 [2024-04-24 19:52:06.677947] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:25.276 [2024-04-24 19:52:06.677986] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:25.276 [2024-04-24 19:52:06.678011] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:25.276 [2024-04-24 19:52:06.678025] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:25.276 [2024-04-24 19:52:06.678037] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
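The app_setup_trace notices above give the exact recipe for capturing the target's tracepoints; following them verbatim (both the spdk_trace invocation and the shm file name come straight from the log):

  # Snapshot the running target's trace events...
  spdk_trace -s nvmf -i 0
  # ...or keep the shared-memory trace file for offline analysis.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0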
00:21:25.276 [2024-04-24 19:52:06.678120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:25.276 [2024-04-24 19:52:06.678173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:25.276 [2024-04-24 19:52:06.678177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:25.276 [2024-04-24 19:52:06.680467] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.276 [2024-04-24 19:52:06.689560] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.276 [2024-04-24 19:52:06.690283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.690545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.276 [2024-04-24 19:52:06.690578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.276 [2024-04-24 19:52:06.690602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.276 [2024-04-24 19:52:06.690884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.276 [2024-04-24 19:52:06.691138] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.277 [2024-04-24 19:52:06.691165] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.277 [2024-04-24 19:52:06.691185] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.277 [2024-04-24 19:52:06.694790] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.277 [2024-04-24 19:52:06.703655] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.277 [2024-04-24 19:52:06.704326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.277 [2024-04-24 19:52:06.704563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.277 [2024-04-24 19:52:06.704591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.277 [2024-04-24 19:52:06.704621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.277 [2024-04-24 19:52:06.704884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.277 [2024-04-24 19:52:06.705136] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.277 [2024-04-24 19:52:06.705163] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.277 [2024-04-24 19:52:06.705183] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.277 [2024-04-24 19:52:06.708762] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
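The three reactors land on cores 1, 2 and 3 because the target was started with -m 0xE: 0xE is binary 1110, i.e. bits 1-3 set, which also matches the earlier 'Total cores available: 3' notice. A one-liner to decode any such core mask:

  # Decode an SPDK core mask into a core list: 0xE -> 1 2 3.
  mask=0xE
  for ((core = 0; core < 64; core++)); do
      (( (mask >> core) & 1 )) && printf '%d ' "$core"
  done; echo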
00:21:25.277 [2024-04-24 19:52:06.717615] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.277 [2024-04-24 19:52:06.718195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.277 [2024-04-24 19:52:06.718561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.277 [2024-04-24 19:52:06.718592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.277 [2024-04-24 19:52:06.718615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.277 [2024-04-24 19:52:06.718882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.277 [2024-04-24 19:52:06.719135] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.277 [2024-04-24 19:52:06.719162] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.277 [2024-04-24 19:52:06.719182] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.277 [2024-04-24 19:52:06.722761] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.277 [2024-04-24 19:52:06.731608] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.277 [2024-04-24 19:52:06.732300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.277 [2024-04-24 19:52:06.732567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.277 [2024-04-24 19:52:06.732600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.277 [2024-04-24 19:52:06.732624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.277 [2024-04-24 19:52:06.732896] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.277 [2024-04-24 19:52:06.733147] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.277 [2024-04-24 19:52:06.733173] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.277 [2024-04-24 19:52:06.733194] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.277 [2024-04-24 19:52:06.736781] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.277 [2024-04-24 19:52:06.745616] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.277 [2024-04-24 19:52:06.746189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.277 [2024-04-24 19:52:06.746416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.277 [2024-04-24 19:52:06.746446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.277 [2024-04-24 19:52:06.746468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.277 [2024-04-24 19:52:06.746734] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.277 [2024-04-24 19:52:06.746982] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.277 [2024-04-24 19:52:06.747009] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.277 [2024-04-24 19:52:06.747028] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.277 [2024-04-24 19:52:06.750578] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.277 [2024-04-24 19:52:06.759659] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.277 [2024-04-24 19:52:06.760305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.277 [2024-04-24 19:52:06.760537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.277 [2024-04-24 19:52:06.760567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.277 [2024-04-24 19:52:06.760591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.277 [2024-04-24 19:52:06.760850] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.277 [2024-04-24 19:52:06.761101] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.277 [2024-04-24 19:52:06.761127] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.277 [2024-04-24 19:52:06.761147] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.277 [2024-04-24 19:52:06.764723] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.277 [2024-04-24 19:52:06.773556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.277 [2024-04-24 19:52:06.774075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.277 [2024-04-24 19:52:06.774261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.277 [2024-04-24 19:52:06.774298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.277 [2024-04-24 19:52:06.774318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.277 [2024-04-24 19:52:06.774558] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.277 [2024-04-24 19:52:06.774822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.277 [2024-04-24 19:52:06.774851] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.277 [2024-04-24 19:52:06.774868] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.277 [2024-04-24 19:52:06.778423] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.277 [2024-04-24 19:52:06.787477] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.277 [2024-04-24 19:52:06.787962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.277 [2024-04-24 19:52:06.788164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.277 [2024-04-24 19:52:06.788194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.277 [2024-04-24 19:52:06.788213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.537 [2024-04-24 19:52:06.788451] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.537 [2024-04-24 19:52:06.788705] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.537 [2024-04-24 19:52:06.788731] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.537 [2024-04-24 19:52:06.788747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.537 [2024-04-24 19:52:06.792307] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.537 [2024-04-24 19:52:06.801379] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.537 [2024-04-24 19:52:06.801860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.802077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.802107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.537 [2024-04-24 19:52:06.802125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.537 [2024-04-24 19:52:06.802364] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.537 [2024-04-24 19:52:06.802608] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.537 [2024-04-24 19:52:06.802646] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.537 [2024-04-24 19:52:06.802670] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.537 [2024-04-24 19:52:06.806224] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.537 [2024-04-24 19:52:06.815264] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.537 [2024-04-24 19:52:06.815745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.815963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.815994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.537 [2024-04-24 19:52:06.816021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.537 [2024-04-24 19:52:06.816261] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.537 [2024-04-24 19:52:06.816504] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.537 [2024-04-24 19:52:06.816529] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.537 [2024-04-24 19:52:06.816546] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.537 [2024-04-24 19:52:06.820111] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.537 [2024-04-24 19:52:06.829153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.537 [2024-04-24 19:52:06.829579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.829828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.829859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.537 [2024-04-24 19:52:06.829877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.537 [2024-04-24 19:52:06.830115] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.537 [2024-04-24 19:52:06.830357] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.537 [2024-04-24 19:52:06.830383] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.537 [2024-04-24 19:52:06.830399] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.537 [2024-04-24 19:52:06.833967] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.537 [2024-04-24 19:52:06.843003] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.537 [2024-04-24 19:52:06.843449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.843665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.843696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.537 [2024-04-24 19:52:06.843715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.537 [2024-04-24 19:52:06.843955] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.537 [2024-04-24 19:52:06.844198] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.537 [2024-04-24 19:52:06.844223] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.537 [2024-04-24 19:52:06.844240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.537 [2024-04-24 19:52:06.847804] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.537 [2024-04-24 19:52:06.856841] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.537 [2024-04-24 19:52:06.857289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.857495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.857525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.537 [2024-04-24 19:52:06.857543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.537 [2024-04-24 19:52:06.857799] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.537 [2024-04-24 19:52:06.858042] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.537 [2024-04-24 19:52:06.858067] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.537 [2024-04-24 19:52:06.858083] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.537 [2024-04-24 19:52:06.861684] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.537 [2024-04-24 19:52:06.870719] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.537 [2024-04-24 19:52:06.871202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.871419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.871448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.537 [2024-04-24 19:52:06.871466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.537 [2024-04-24 19:52:06.871715] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.537 [2024-04-24 19:52:06.871959] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.537 [2024-04-24 19:52:06.871984] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.537 [2024-04-24 19:52:06.871999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.537 [2024-04-24 19:52:06.875558] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.537 [2024-04-24 19:52:06.884595] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.537 [2024-04-24 19:52:06.885084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.885271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.885300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.537 [2024-04-24 19:52:06.885317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.537 [2024-04-24 19:52:06.885556] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.537 [2024-04-24 19:52:06.885811] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.537 [2024-04-24 19:52:06.885837] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.537 [2024-04-24 19:52:06.885853] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.537 [2024-04-24 19:52:06.889411] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.537 [2024-04-24 19:52:06.898445] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.537 [2024-04-24 19:52:06.898898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.899119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.537 [2024-04-24 19:52:06.899149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.538 [2024-04-24 19:52:06.899167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.538 [2024-04-24 19:52:06.899404] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.538 [2024-04-24 19:52:06.899663] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.538 [2024-04-24 19:52:06.899689] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.538 [2024-04-24 19:52:06.899706] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.538 [2024-04-24 19:52:06.903266] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.538 [2024-04-24 19:52:06.912316] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.538 [2024-04-24 19:52:06.912768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:06.912967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:06.912995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.538 [2024-04-24 19:52:06.913013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.538 [2024-04-24 19:52:06.913252] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.538 [2024-04-24 19:52:06.913496] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.538 [2024-04-24 19:52:06.913522] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.538 [2024-04-24 19:52:06.913538] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.538 [2024-04-24 19:52:06.917103] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.538 [2024-04-24 19:52:06.926185] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.538 [2024-04-24 19:52:06.926638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:06.926843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:06.926874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.538 [2024-04-24 19:52:06.926893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.538 [2024-04-24 19:52:06.927131] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.538 [2024-04-24 19:52:06.927374] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.538 [2024-04-24 19:52:06.927399] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.538 [2024-04-24 19:52:06.927416] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.538 [2024-04-24 19:52:06.930982] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.538 [2024-04-24 19:52:06.940013] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.538 [2024-04-24 19:52:06.940474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:06.940683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:06.940712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.538 [2024-04-24 19:52:06.940730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.538 [2024-04-24 19:52:06.940968] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.538 [2024-04-24 19:52:06.941212] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.538 [2024-04-24 19:52:06.941242] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.538 [2024-04-24 19:52:06.941258] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.538 [2024-04-24 19:52:06.944831] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.538 [2024-04-24 19:52:06.953870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.538 [2024-04-24 19:52:06.954337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:06.954543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:06.954572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.538 [2024-04-24 19:52:06.954590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.538 [2024-04-24 19:52:06.954837] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.538 [2024-04-24 19:52:06.955079] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.538 [2024-04-24 19:52:06.955103] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.538 [2024-04-24 19:52:06.955118] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.538 [2024-04-24 19:52:06.958688] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.538 [2024-04-24 19:52:06.967733] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.538 [2024-04-24 19:52:06.968211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:06.968419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:06.968449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.538 [2024-04-24 19:52:06.968467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.538 [2024-04-24 19:52:06.968723] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.538 [2024-04-24 19:52:06.968967] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.538 [2024-04-24 19:52:06.968992] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.538 [2024-04-24 19:52:06.969008] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.538 [2024-04-24 19:52:06.972559] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.538 [2024-04-24 19:52:06.981621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.538 [2024-04-24 19:52:06.982084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:06.982295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:06.982325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.538 [2024-04-24 19:52:06.982343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.538 [2024-04-24 19:52:06.982581] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.538 [2024-04-24 19:52:06.982835] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.538 [2024-04-24 19:52:06.982860] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.538 [2024-04-24 19:52:06.982882] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.538 [2024-04-24 19:52:06.986447] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.538 [2024-04-24 19:52:06.995500] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.538 [2024-04-24 19:52:06.995927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:06.996127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:06.996156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.538 [2024-04-24 19:52:06.996174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.538 [2024-04-24 19:52:06.996412] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.538 [2024-04-24 19:52:06.996668] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.538 [2024-04-24 19:52:06.996694] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.538 [2024-04-24 19:52:06.996710] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.538 [2024-04-24 19:52:07.000274] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.538 [2024-04-24 19:52:07.009357] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.538 [2024-04-24 19:52:07.009805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:07.009980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:07.010009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.538 [2024-04-24 19:52:07.010027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.538 [2024-04-24 19:52:07.010264] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.538 [2024-04-24 19:52:07.010507] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.538 [2024-04-24 19:52:07.010531] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.538 [2024-04-24 19:52:07.010547] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.538 [2024-04-24 19:52:07.014120] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.538 [2024-04-24 19:52:07.023386] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.538 [2024-04-24 19:52:07.023834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:07.024023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.538 [2024-04-24 19:52:07.024054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.538 [2024-04-24 19:52:07.024073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.538 [2024-04-24 19:52:07.024311] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.538 [2024-04-24 19:52:07.024554] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.539 [2024-04-24 19:52:07.024578] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.539 [2024-04-24 19:52:07.024594] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.539 [2024-04-24 19:52:07.028171] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.539 [2024-04-24 19:52:07.037220] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.539 [2024-04-24 19:52:07.037682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.539 [2024-04-24 19:52:07.037856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.539 [2024-04-24 19:52:07.037885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.539 [2024-04-24 19:52:07.037904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.539 [2024-04-24 19:52:07.038142] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.539 [2024-04-24 19:52:07.038385] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.539 [2024-04-24 19:52:07.038409] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.539 [2024-04-24 19:52:07.038424] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.539 [2024-04-24 19:52:07.042001] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.798 [2024-04-24 19:52:07.051059] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.798 [2024-04-24 19:52:07.051495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.798 [2024-04-24 19:52:07.051705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.798 [2024-04-24 19:52:07.051735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.798 [2024-04-24 19:52:07.051753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.798 [2024-04-24 19:52:07.051991] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.798 [2024-04-24 19:52:07.052233] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.798 [2024-04-24 19:52:07.052258] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.798 [2024-04-24 19:52:07.052273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.798 [2024-04-24 19:52:07.055848] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.798 [2024-04-24 19:52:07.064899] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:25.798 [2024-04-24 19:52:07.065363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.798 [2024-04-24 19:52:07.065566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:25.798 [2024-04-24 19:52:07.065595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:25.798 [2024-04-24 19:52:07.065613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:25.798 [2024-04-24 19:52:07.065858] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:25.798 [2024-04-24 19:52:07.066101] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:25.798 [2024-04-24 19:52:07.066126] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:25.798 [2024-04-24 19:52:07.066141] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:25.798 [2024-04-24 19:52:07.069715] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.798 [2024-04-24 19:52:07.078771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.798 [2024-04-24 19:52:07.079229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.798 [2024-04-24 19:52:07.079411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.798 [2024-04-24 19:52:07.079440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.798 [2024-04-24 19:52:07.079458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.798 [2024-04-24 19:52:07.079706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.798 [2024-04-24 19:52:07.079948] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.798 [2024-04-24 19:52:07.079973] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.798 [2024-04-24 19:52:07.079988] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.798 [2024-04-24 19:52:07.083556] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.798 [2024-04-24 19:52:07.092598] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.798 [2024-04-24 19:52:07.093060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.798 [2024-04-24 19:52:07.093266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.798 [2024-04-24 19:52:07.093296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.798 [2024-04-24 19:52:07.093315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.798 [2024-04-24 19:52:07.093553] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.798 [2024-04-24 19:52:07.093808] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.798 [2024-04-24 19:52:07.093833] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.798 [2024-04-24 19:52:07.093848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.798 [2024-04-24 19:52:07.097412] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.798 [2024-04-24 19:52:07.106469] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.798 [2024-04-24 19:52:07.106948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.798 [2024-04-24 19:52:07.107152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.798 [2024-04-24 19:52:07.107181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.798 [2024-04-24 19:52:07.107199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.798 [2024-04-24 19:52:07.107437] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.798 [2024-04-24 19:52:07.107691] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.799 [2024-04-24 19:52:07.107716] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.799 [2024-04-24 19:52:07.107732] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.799 [2024-04-24 19:52:07.111292] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.799 [2024-04-24 19:52:07.120338] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.799 [2024-04-24 19:52:07.120800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.799 [2024-04-24 19:52:07.121008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.799 [2024-04-24 19:52:07.121036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.799 [2024-04-24 19:52:07.121055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.799 [2024-04-24 19:52:07.121293] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.799 [2024-04-24 19:52:07.121537] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.799 [2024-04-24 19:52:07.121563] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.799 [2024-04-24 19:52:07.121579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.799 [2024-04-24 19:52:07.125147] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.799 [2024-04-24 19:52:07.134193] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.799 [2024-04-24 19:52:07.134648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.799 [2024-04-24 19:52:07.134849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.799 [2024-04-24 19:52:07.134878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.799 [2024-04-24 19:52:07.134896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.799 [2024-04-24 19:52:07.135135] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.799 [2024-04-24 19:52:07.135378] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.799 [2024-04-24 19:52:07.135404] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.799 [2024-04-24 19:52:07.135420] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.799 [2024-04-24 19:52:07.138991] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.799 [2024-04-24 19:52:07.148033] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.799 [2024-04-24 19:52:07.148496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.799 [2024-04-24 19:52:07.148679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.799 [2024-04-24 19:52:07.148709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.799 [2024-04-24 19:52:07.148727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.799 [2024-04-24 19:52:07.148965] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.799 [2024-04-24 19:52:07.149208] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.799 [2024-04-24 19:52:07.149233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.799 [2024-04-24 19:52:07.149250] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.799 [2024-04-24 19:52:07.152818] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.799 [2024-04-24 19:52:07.161862] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.799 [2024-04-24 19:52:07.162299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.799 [2024-04-24 19:52:07.162502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.799 [2024-04-24 19:52:07.162534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.799 [2024-04-24 19:52:07.162553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.799 [2024-04-24 19:52:07.162800] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.799 [2024-04-24 19:52:07.163043] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.799 [2024-04-24 19:52:07.163068] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.799 [2024-04-24 19:52:07.163083] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.799 [2024-04-24 19:52:07.166652] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.799 [2024-04-24 19:52:07.175893] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.799 [2024-04-24 19:52:07.176334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.799 [2024-04-24 19:52:07.176519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.799 [2024-04-24 19:52:07.176549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.799 [2024-04-24 19:52:07.176567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.799 [2024-04-24 19:52:07.176814] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.799 [2024-04-24 19:52:07.177058] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.799 [2024-04-24 19:52:07.177083] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.799 [2024-04-24 19:52:07.177099] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.799 [2024-04-24 19:52:07.180713] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.799 [2024-04-24 19:52:07.189762] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.799 [2024-04-24 19:52:07.190228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.799 [2024-04-24 19:52:07.190455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.799 [2024-04-24 19:52:07.190484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.799 [2024-04-24 19:52:07.190502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.799 [2024-04-24 19:52:07.190750] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.799 [2024-04-24 19:52:07.190993] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.799 [2024-04-24 19:52:07.191017] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.799 [2024-04-24 19:52:07.191033] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.799 [2024-04-24 19:52:07.194592] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.799 [2024-04-24 19:52:07.203642] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.799 [2024-04-24 19:52:07.204103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.799 [2024-04-24 19:52:07.204300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.799 [2024-04-24 19:52:07.204329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.799 [2024-04-24 19:52:07.204353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.799 [2024-04-24 19:52:07.204591] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.799 [2024-04-24 19:52:07.204842] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.799 [2024-04-24 19:52:07.204867] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.799 [2024-04-24 19:52:07.204884] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.799 [2024-04-24 19:52:07.208448] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.800 [2024-04-24 19:52:07.217495] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.800 [2024-04-24 19:52:07.217973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.800 [2024-04-24 19:52:07.218145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.800 [2024-04-24 19:52:07.218173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.800 [2024-04-24 19:52:07.218191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.800 [2024-04-24 19:52:07.218428] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.800 [2024-04-24 19:52:07.218684] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.800 [2024-04-24 19:52:07.218709] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.800 [2024-04-24 19:52:07.218724] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.800 [2024-04-24 19:52:07.222284] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.800 [2024-04-24 19:52:07.231323] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.800 [2024-04-24 19:52:07.231758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.800 [2024-04-24 19:52:07.231986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.800 [2024-04-24 19:52:07.232014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.800 [2024-04-24 19:52:07.232032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.800 [2024-04-24 19:52:07.232271] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.800 [2024-04-24 19:52:07.232514] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.800 [2024-04-24 19:52:07.232538] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.800 [2024-04-24 19:52:07.232554] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.800 [2024-04-24 19:52:07.236121] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.800 [2024-04-24 19:52:07.245160] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.800 [2024-04-24 19:52:07.245587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.800 [2024-04-24 19:52:07.245834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.800 [2024-04-24 19:52:07.245864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.800 [2024-04-24 19:52:07.245882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.800 [2024-04-24 19:52:07.246126] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.800 [2024-04-24 19:52:07.246368] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.800 [2024-04-24 19:52:07.246392] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.800 [2024-04-24 19:52:07.246408] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.800 [2024-04-24 19:52:07.249974] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.800 [2024-04-24 19:52:07.259018] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.800 [2024-04-24 19:52:07.259426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.800 [2024-04-24 19:52:07.259659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.800 [2024-04-24 19:52:07.259689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.800 [2024-04-24 19:52:07.259707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.800 [2024-04-24 19:52:07.259945] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.800 [2024-04-24 19:52:07.260188] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.800 [2024-04-24 19:52:07.260213] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.800 [2024-04-24 19:52:07.260230] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.800 [2024-04-24 19:52:07.263805] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.800 [2024-04-24 19:52:07.272858] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.800 [2024-04-24 19:52:07.273295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.800 [2024-04-24 19:52:07.273495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.800 [2024-04-24 19:52:07.273525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.800 [2024-04-24 19:52:07.273543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.800 [2024-04-24 19:52:07.273791] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.800 [2024-04-24 19:52:07.274035] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.800 [2024-04-24 19:52:07.274060] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.800 [2024-04-24 19:52:07.274075] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.800 [2024-04-24 19:52:07.277643] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.800 [2024-04-24 19:52:07.286696] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.800 [2024-04-24 19:52:07.287136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.800 [2024-04-24 19:52:07.287337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.800 [2024-04-24 19:52:07.287366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.800 [2024-04-24 19:52:07.287385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.800 [2024-04-24 19:52:07.287623] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.800 [2024-04-24 19:52:07.287889] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.800 [2024-04-24 19:52:07.287915] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.800 [2024-04-24 19:52:07.287930] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.800 [2024-04-24 19:52:07.291478] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.800 [2024-04-24 19:52:07.300531] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.800 [2024-04-24 19:52:07.301001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.800 [2024-04-24 19:52:07.301172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.800 [2024-04-24 19:52:07.301201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:25.800 [2024-04-24 19:52:07.301218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:25.800 [2024-04-24 19:52:07.301456] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:25.800 [2024-04-24 19:52:07.301710] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.800 [2024-04-24 19:52:07.301735] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.800 [2024-04-24 19:52:07.301751] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.800 [2024-04-24 19:52:07.305312] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:26.060 [2024-04-24 19:52:07.314556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:26.060 [2024-04-24 19:52:07.315035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.060 [2024-04-24 19:52:07.315258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.060 [2024-04-24 19:52:07.315287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:26.060 [2024-04-24 19:52:07.315305] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:26.060 [2024-04-24 19:52:07.315542] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:26.060 [2024-04-24 19:52:07.315806] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:26.060 [2024-04-24 19:52:07.315832] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:26.060 [2024-04-24 19:52:07.315848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:26.060 [2024-04-24 19:52:07.319401] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:26.060 [2024-04-24 19:52:07.328451] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:26.060 [2024-04-24 19:52:07.328900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.060 [2024-04-24 19:52:07.329102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.060 [2024-04-24 19:52:07.329131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:26.060 [2024-04-24 19:52:07.329149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:26.060 [2024-04-24 19:52:07.329387] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:26.060 [2024-04-24 19:52:07.329641] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:26.060 [2024-04-24 19:52:07.329671] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:26.060 [2024-04-24 19:52:07.329687] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:26.060 [2024-04-24 19:52:07.333245] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:26.060 [2024-04-24 19:52:07.342288] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:26.060 [2024-04-24 19:52:07.342754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.060 [2024-04-24 19:52:07.342969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.060 [2024-04-24 19:52:07.342998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:26.060 [2024-04-24 19:52:07.343017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:26.060 [2024-04-24 19:52:07.343255] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:26.060 [2024-04-24 19:52:07.343497] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:26.060 [2024-04-24 19:52:07.343521] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:26.060 [2024-04-24 19:52:07.343537] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:26.060 [2024-04-24 19:52:07.347107] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:26.060 [2024-04-24 19:52:07.356150] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:26.060 [2024-04-24 19:52:07.356622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.060 [2024-04-24 19:52:07.356844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.060 [2024-04-24 19:52:07.356873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:26.060 [2024-04-24 19:52:07.356891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:26.060 [2024-04-24 19:52:07.357129] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:26.060 [2024-04-24 19:52:07.357372] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:26.060 [2024-04-24 19:52:07.357396] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:26.060 [2024-04-24 19:52:07.357412] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:26.060 [2024-04-24 19:52:07.360980] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:26.060 [2024-04-24 19:52:07.370019] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:26.060 [2024-04-24 19:52:07.370472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.060 [2024-04-24 19:52:07.370670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.060 [2024-04-24 19:52:07.370700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:26.060 [2024-04-24 19:52:07.370718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:26.060 [2024-04-24 19:52:07.370955] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:26.060 [2024-04-24 19:52:07.371198] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:26.060 [2024-04-24 19:52:07.371222] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:26.060 [2024-04-24 19:52:07.371244] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:26.060 [2024-04-24 19:52:07.374841] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:26.060 [2024-04-24 19:52:07.383894] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:26.060 [2024-04-24 19:52:07.384341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.060 [2024-04-24 19:52:07.384567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.060 [2024-04-24 19:52:07.384596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:26.060 [2024-04-24 19:52:07.384614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:26.060 [2024-04-24 19:52:07.384859] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:26.060 [2024-04-24 19:52:07.385102] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:26.060 [2024-04-24 19:52:07.385127] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:26.060 [2024-04-24 19:52:07.385143] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:26.060 [2024-04-24 19:52:07.388709] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:26.061 [2024-04-24 19:52:07.397743] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:26.061 [2024-04-24 19:52:07.398147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.061 [2024-04-24 19:52:07.398314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.061 [2024-04-24 19:52:07.398342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:26.061 [2024-04-24 19:52:07.398361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:26.061 [2024-04-24 19:52:07.398598] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:26.061 [2024-04-24 19:52:07.398850] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:26.061 [2024-04-24 19:52:07.398875] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:26.061 [2024-04-24 19:52:07.398890] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:26.061 [2024-04-24 19:52:07.402453] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:26.061 [2024-04-24 19:52:07.411712] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:26.061 [2024-04-24 19:52:07.412182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.061 [2024-04-24 19:52:07.412365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.061 [2024-04-24 19:52:07.412395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:26.061 [2024-04-24 19:52:07.412413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:26.061 [2024-04-24 19:52:07.412663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:26.061 [2024-04-24 19:52:07.412907] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:26.061 [2024-04-24 19:52:07.412931] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:26.061 [2024-04-24 19:52:07.412947] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:26.061 [2024-04-24 19:52:07.416512] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:26.061 [2024-04-24 19:52:07.425554] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:26.061 [2024-04-24 19:52:07.426007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.061 [2024-04-24 19:52:07.426240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.061 [2024-04-24 19:52:07.426269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:26.061 [2024-04-24 19:52:07.426287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:26.061 [2024-04-24 19:52:07.426525] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:26.061 [2024-04-24 19:52:07.426779] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:26.061 [2024-04-24 19:52:07.426804] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:26.061 [2024-04-24 19:52:07.426819] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:26.061 [2024-04-24 19:52:07.430379] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:26.061 [2024-04-24 19:52:07.439418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:26.061 [2024-04-24 19:52:07.439881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.061 [2024-04-24 19:52:07.440057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.061 [2024-04-24 19:52:07.440086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:26.061 [2024-04-24 19:52:07.440104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:26.061 [2024-04-24 19:52:07.440341] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:26.061 [2024-04-24 19:52:07.440584] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:26.061 [2024-04-24 19:52:07.440609] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:26.061 [2024-04-24 19:52:07.440624] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:26.061 [2024-04-24 19:52:07.444197] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:26.061 [2024-04-24 19:52:07.452974] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:26.061 [2024-04-24 19:52:07.453398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.061 [2024-04-24 19:52:07.453642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:26.061 [2024-04-24 19:52:07.453681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420 00:21:26.061 [2024-04-24 19:52:07.453697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set 00:21:26.061 [2024-04-24 19:52:07.453912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor 00:21:26.061 [2024-04-24 19:52:07.454140] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:26.061 [2024-04-24 19:52:07.454174] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:26.061 [2024-04-24 19:52:07.454188] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:26.061 [2024-04-24 19:52:07.457429] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
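Worth decoding before the trace resumes: errno 111 is ECONNREFUSED, i.e. the kernel actively refused the TCP SYN because nothing was listening on 10.0.0.2:4420 while the target restarts, so bdev_nvme keeps retrying the controller reset until the listener returns. A minimal, hypothetical way to spot-check that listener from the same host (not part of the test scripts, just a bash sketch):

    # Hypothetical spot-check, not part of host/bdevperf.sh: probe the NVMe/TCP
    # listener with bash's /dev/tcp redirection. A refused connect here is the
    # same condition as the "connect() failed, errno = 111" entries above.
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "listener up on 10.0.0.2:4420"    # FD 3 closes with the subshell
    else
        echo "connection refused: nothing listening on 10.0.0.2:4420 yet"
    fi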
00:21:26.061 19:52:07 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:26.061 19:52:07 -- common/autotest_common.sh@850 -- # return 0
00:21:26.061 19:52:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:21:26.061 19:52:07 -- common/autotest_common.sh@716 -- # xtrace_disable
00:21:26.061 19:52:07 -- common/autotest_common.sh@10 -- # set +x
00:21:26.061 [2024-04-24 19:52:07.466636] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:26.061 [2024-04-24 19:52:07.467093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.061 [2024-04-24 19:52:07.467278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.061 [2024-04-24 19:52:07.467304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:26.061 [2024-04-24 19:52:07.467321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:26.061 [2024-04-24 19:52:07.467535] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:26.061 [2024-04-24 19:52:07.467795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:26.061 [2024-04-24 19:52:07.467819] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:26.061 [2024-04-24 19:52:07.467833] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:26.061 [2024-04-24 19:52:07.471135] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:26.061 [2024-04-24 19:52:07.480195] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:26.061 [2024-04-24 19:52:07.480569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.061 [2024-04-24 19:52:07.480753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.061 [2024-04-24 19:52:07.480781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:26.061 [2024-04-24 19:52:07.480799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:26.061 [2024-04-24 19:52:07.481024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:26.061 [2024-04-24 19:52:07.481229] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:26.061 [2024-04-24 19:52:07.481250] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:26.061 [2024-04-24 19:52:07.481263] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:26.061 [2024-04-24 19:52:07.484423] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:26.061 19:52:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:26.061 19:52:07 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:26.061 19:52:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:26.061 19:52:07 -- common/autotest_common.sh@10 -- # set +x
00:21:26.061 [2024-04-24 19:52:07.490347] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:26.061 [2024-04-24 19:52:07.493682] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:26.061 [2024-04-24 19:52:07.494091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.061 [2024-04-24 19:52:07.494272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.061 [2024-04-24 19:52:07.494298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:26.061 [2024-04-24 19:52:07.494314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:26.061 [2024-04-24 19:52:07.494528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:26.061 [2024-04-24 19:52:07.494795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:26.061 [2024-04-24 19:52:07.494818] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:26.061 [2024-04-24 19:52:07.494832] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:26.061 19:52:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:26.061 19:52:07 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:26.061 19:52:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:26.061 19:52:07 -- common/autotest_common.sh@10 -- # set +x
00:21:26.061 [2024-04-24 19:52:07.498072] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:26.061 [2024-04-24 19:52:07.507061] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:26.061 [2024-04-24 19:52:07.507488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.061 [2024-04-24 19:52:07.510642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.062 [2024-04-24 19:52:07.510682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:26.062 [2024-04-24 19:52:07.510699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:26.062 [2024-04-24 19:52:07.510935] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:26.062 [2024-04-24 19:52:07.511154] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:26.062 [2024-04-24 19:52:07.511175] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:26.062 [2024-04-24 19:52:07.511189] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:26.062 [2024-04-24 19:52:07.514338] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:26.062 [2024-04-24 19:52:07.520563] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:26.062 [2024-04-24 19:52:07.521139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.062 [2024-04-24 19:52:07.521370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.062 [2024-04-24 19:52:07.521398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:26.062 [2024-04-24 19:52:07.521418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:26.062 [2024-04-24 19:52:07.521678] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:26.062 [2024-04-24 19:52:07.521901] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:26.062 [2024-04-24 19:52:07.521937] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:26.062 [2024-04-24 19:52:07.521953] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:26.062 [2024-04-24 19:52:07.525166] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:26.062 Malloc0
00:21:26.062 19:52:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:26.062 19:52:07 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:26.062 19:52:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:26.062 19:52:07 -- common/autotest_common.sh@10 -- # set +x
00:21:26.062 [2024-04-24 19:52:07.534342] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:26.062 [2024-04-24 19:52:07.534927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.062 [2024-04-24 19:52:07.535103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.062 [2024-04-24 19:52:07.535130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:26.062 [2024-04-24 19:52:07.535158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:26.062 [2024-04-24 19:52:07.535391] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:26.062 [2024-04-24 19:52:07.535624] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:26.062 [2024-04-24 19:52:07.535655] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:26.062 [2024-04-24 19:52:07.535672] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:26.062 [2024-04-24 19:52:07.538878] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:26.062 19:52:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:26.062 19:52:07 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:26.062 19:52:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:26.062 19:52:07 -- common/autotest_common.sh@10 -- # set +x
00:21:26.062 [2024-04-24 19:52:07.547976] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:26.062 [2024-04-24 19:52:07.548415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.062 [2024-04-24 19:52:07.548615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:26.062 [2024-04-24 19:52:07.548649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1170 with addr=10.0.0.2, port=4420
00:21:26.062 [2024-04-24 19:52:07.548666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1170 is same with the state(5) to be set
00:21:26.062 [2024-04-24 19:52:07.548880] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1170 (9): Bad file descriptor
00:21:26.062 19:52:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:26.062 [2024-04-24 19:52:07.549113] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:26.062 [2024-04-24 19:52:07.549135] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:26.062 [2024-04-24 19:52:07.549148] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:26.062 19:52:07 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:26.062 19:52:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:26.062 19:52:07 -- common/autotest_common.sh@10 -- # set +x
00:21:26.062 [2024-04-24 19:52:07.552393] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:26.062 [2024-04-24 19:52:07.553009] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:26.062 19:52:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:26.062 19:52:07 -- host/bdevperf.sh@38 -- # wait 1775706
00:21:26.062 [2024-04-24 19:52:07.561387] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:26.321 [2024-04-24 19:52:07.761522] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
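The rpc_cmd calls traced above are the whole target-side bring-up: create the TCP transport, create a 64 MiB malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach Malloc0 as its namespace, then add the 10.0.0.2:4420 listener; note the reconnect storm only ends ("Resetting controller successful") once the listener notice appears. As a sketch, the same sequence driven directly with SPDK's scripts/rpc.py against an already-running nvmf_tgt (rpc.py path and default RPC socket assumed; flags copied verbatim from the trace):

    # Sketch only: the test's rpc_cmd wrapper issues these same RPCs.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420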
00:21:36.342 00:21:36.342 Latency(us) 00:21:36.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.342 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:36.342 Verification LBA range: start 0x0 length 0x4000 00:21:36.342 Nvme1n1 : 15.01 6180.03 24.14 10698.04 0.00 7559.53 1183.29 18932.62 00:21:36.342 =================================================================================================================== 00:21:36.342 Total : 6180.03 24.14 10698.04 0.00 7559.53 1183.29 18932.62 00:21:36.342 19:52:16 -- host/bdevperf.sh@39 -- # sync 00:21:36.342 19:52:16 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:36.342 19:52:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.342 19:52:16 -- common/autotest_common.sh@10 -- # set +x 00:21:36.342 19:52:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.342 19:52:16 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:21:36.342 19:52:16 -- host/bdevperf.sh@44 -- # nvmftestfini 00:21:36.342 19:52:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:36.342 19:52:16 -- nvmf/common.sh@117 -- # sync 00:21:36.342 19:52:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:36.342 19:52:16 -- nvmf/common.sh@120 -- # set +e 00:21:36.342 19:52:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:36.342 19:52:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:36.342 rmmod nvme_tcp 00:21:36.342 rmmod nvme_fabrics 00:21:36.342 rmmod nvme_keyring 00:21:36.342 19:52:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:36.342 19:52:16 -- nvmf/common.sh@124 -- # set -e 00:21:36.342 19:52:16 -- nvmf/common.sh@125 -- # return 0 00:21:36.342 19:52:16 -- nvmf/common.sh@478 -- # '[' -n 1776367 ']' 00:21:36.342 19:52:16 -- nvmf/common.sh@479 -- # killprocess 1776367 00:21:36.342 19:52:16 -- common/autotest_common.sh@936 -- # '[' -z 1776367 ']' 00:21:36.342 19:52:16 -- common/autotest_common.sh@940 -- # kill -0 1776367 00:21:36.342 19:52:16 -- common/autotest_common.sh@941 -- # uname 00:21:36.342 19:52:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:36.342 19:52:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1776367 00:21:36.342 19:52:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:36.342 19:52:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:36.342 19:52:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1776367' 00:21:36.342 killing process with pid 1776367 00:21:36.342 19:52:16 -- common/autotest_common.sh@955 -- # kill 1776367 00:21:36.342 19:52:16 -- common/autotest_common.sh@960 -- # wait 1776367 00:21:36.342 19:52:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:36.342 19:52:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:36.342 19:52:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:36.342 19:52:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:36.342 19:52:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:36.342 19:52:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.342 19:52:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.342 19:52:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.278 19:52:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:37.278 00:21:37.278 real 0m22.600s 00:21:37.278 user 0m56.479s 00:21:37.278 sys 0m5.617s 00:21:37.278 19:52:18 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:21:37.278 19:52:18 -- common/autotest_common.sh@10 -- # set +x 00:21:37.278 ************************************ 00:21:37.278 END TEST nvmf_bdevperf 00:21:37.278 ************************************ 00:21:37.278 19:52:18 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:21:37.278 19:52:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:37.278 19:52:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:37.278 19:52:18 -- common/autotest_common.sh@10 -- # set +x 00:21:37.536 ************************************ 00:21:37.536 START TEST nvmf_target_disconnect 00:21:37.536 ************************************ 00:21:37.536 19:52:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:21:37.536 * Looking for test storage... 00:21:37.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:37.536 19:52:18 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:37.536 19:52:18 -- nvmf/common.sh@7 -- # uname -s 00:21:37.536 19:52:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:37.536 19:52:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:37.536 19:52:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:37.536 19:52:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:37.536 19:52:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:37.536 19:52:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:37.536 19:52:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:37.536 19:52:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:37.536 19:52:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:37.536 19:52:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:37.537 19:52:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.537 19:52:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.537 19:52:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:37.537 19:52:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:37.537 19:52:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:37.537 19:52:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:37.537 19:52:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:37.537 19:52:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:37.537 19:52:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:37.537 19:52:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:37.537 19:52:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.537 19:52:18 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.537 19:52:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.537 19:52:18 -- paths/export.sh@5 -- # export PATH 00:21:37.537 19:52:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:37.537 19:52:18 -- nvmf/common.sh@47 -- # : 0 00:21:37.537 19:52:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:37.537 19:52:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:37.537 19:52:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:37.537 19:52:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:37.537 19:52:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:37.537 19:52:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:37.537 19:52:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:37.537 19:52:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:37.537 19:52:18 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:37.537 19:52:18 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:21:37.537 19:52:18 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:21:37.537 19:52:18 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:21:37.537 19:52:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:37.537 19:52:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.537 19:52:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:37.537 19:52:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:37.537 19:52:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:37.537 19:52:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.537 19:52:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.537 19:52:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.537 19:52:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:37.537 19:52:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:37.537 19:52:18 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:21:37.537 19:52:18 -- common/autotest_common.sh@10 -- # set +x 00:21:39.449 19:52:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:39.449 19:52:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:39.449 19:52:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:39.449 19:52:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:39.449 19:52:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:39.449 19:52:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:39.449 19:52:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:39.449 19:52:20 -- nvmf/common.sh@295 -- # net_devs=() 00:21:39.449 19:52:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:39.449 19:52:20 -- nvmf/common.sh@296 -- # e810=() 00:21:39.449 19:52:20 -- nvmf/common.sh@296 -- # local -ga e810 00:21:39.449 19:52:20 -- nvmf/common.sh@297 -- # x722=() 00:21:39.449 19:52:20 -- nvmf/common.sh@297 -- # local -ga x722 00:21:39.449 19:52:20 -- nvmf/common.sh@298 -- # mlx=() 00:21:39.449 19:52:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:39.449 19:52:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.449 19:52:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.449 19:52:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.449 19:52:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.449 19:52:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.449 19:52:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.449 19:52:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.449 19:52:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.449 19:52:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.449 19:52:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.449 19:52:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.449 19:52:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:39.449 19:52:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:39.449 19:52:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:39.449 19:52:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.449 19:52:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:39.449 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:39.449 19:52:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:39.449 19:52:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:39.449 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:39.449 19:52:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.449 19:52:20 
-- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:39.449 19:52:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.449 19:52:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.449 19:52:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:39.449 19:52:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.449 19:52:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:39.449 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:39.449 19:52:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.449 19:52:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:39.449 19:52:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.449 19:52:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:39.449 19:52:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.449 19:52:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:39.449 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:39.449 19:52:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.449 19:52:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:39.449 19:52:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:39.449 19:52:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:39.449 19:52:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:39.449 19:52:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.449 19:52:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.449 19:52:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.449 19:52:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:39.449 19:52:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.449 19:52:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.449 19:52:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:39.449 19:52:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.449 19:52:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.449 19:52:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:39.449 19:52:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:39.449 19:52:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.449 19:52:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.449 19:52:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.449 19:52:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.449 19:52:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:39.449 19:52:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.449 19:52:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.708 19:52:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.708 19:52:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:39.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:39.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:21:39.708 00:21:39.708 --- 10.0.0.2 ping statistics --- 00:21:39.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.708 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:21:39.708 19:52:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:39.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:21:39.708 00:21:39.708 --- 10.0.0.1 ping statistics --- 00:21:39.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.708 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:39.708 19:52:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.708 19:52:20 -- nvmf/common.sh@411 -- # return 0 00:21:39.708 19:52:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:39.708 19:52:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.708 19:52:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:39.708 19:52:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:39.708 19:52:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.708 19:52:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:39.708 19:52:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:39.708 19:52:21 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:21:39.708 19:52:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:39.708 19:52:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:39.708 19:52:21 -- common/autotest_common.sh@10 -- # set +x 00:21:39.708 ************************************ 00:21:39.708 START TEST nvmf_target_disconnect_tc1 00:21:39.708 ************************************ 00:21:39.708 19:52:21 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:21:39.708 19:52:21 -- host/target_disconnect.sh@32 -- # set +e 00:21:39.708 19:52:21 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:39.708 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.708 [2024-04-24 19:52:21.189141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.708 [2024-04-24 19:52:21.189480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.708 [2024-04-24 19:52:21.189528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22caad0 with addr=10.0.0.2, port=4420 00:21:39.708 [2024-04-24 19:52:21.189565] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:39.708 [2024-04-24 19:52:21.189595] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:39.708 [2024-04-24 19:52:21.189610] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:21:39.708 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:21:39.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:21:39.708 Initializing NVMe Controllers 00:21:39.708 19:52:21 -- host/target_disconnect.sh@33 -- # trap - ERR 00:21:39.708 19:52:21 -- host/target_disconnect.sh@33 -- # print_backtrace 00:21:39.708 19:52:21 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:21:39.708 19:52:21 -- common/autotest_common.sh@1139 -- # return 0 00:21:39.709 
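tc1 above is a negative test: the reconnect example is launched before any target exists in this test file, so spdk_nvme_probe() fails with ECONNREFUSED and the harness counts that failure as a pass (the set +e / '[' 1 '!=' 1 ']' / set -e sequence). A stand-alone sketch of the same pattern, using the exact initiator invocation from this run:

    # Sketch of nvmf_target_disconnect_tc1: the initiator MUST fail while
    # nothing listens on 10.0.0.2:4420
    set +e
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    rc=$?
    set -e
    [ "$rc" -ne 0 ] || { echo 'reconnect unexpectedly succeeded'; exit 1; }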
19:52:21 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:21:39.709 19:52:21 -- host/target_disconnect.sh@41 -- # set -e 00:21:39.709 00:21:39.709 real 0m0.097s 00:21:39.709 user 0m0.036s 00:21:39.709 sys 0m0.059s 00:21:39.709 19:52:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:39.709 19:52:21 -- common/autotest_common.sh@10 -- # set +x 00:21:39.709 ************************************ 00:21:39.709 END TEST nvmf_target_disconnect_tc1 00:21:39.709 ************************************ 00:21:39.966 19:52:21 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:21:39.966 19:52:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:39.966 19:52:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:39.966 19:52:21 -- common/autotest_common.sh@10 -- # set +x 00:21:39.966 ************************************ 00:21:39.966 START TEST nvmf_target_disconnect_tc2 00:21:39.966 ************************************ 00:21:39.966 19:52:21 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:21:39.966 19:52:21 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:21:39.966 19:52:21 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:21:39.966 19:52:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:39.966 19:52:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:39.966 19:52:21 -- common/autotest_common.sh@10 -- # set +x 00:21:39.966 19:52:21 -- nvmf/common.sh@470 -- # nvmfpid=1779542 00:21:39.966 19:52:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:21:39.966 19:52:21 -- nvmf/common.sh@471 -- # waitforlisten 1779542 00:21:39.966 19:52:21 -- common/autotest_common.sh@817 -- # '[' -z 1779542 ']' 00:21:39.966 19:52:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.966 19:52:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:39.966 19:52:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.966 19:52:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:39.966 19:52:21 -- common/autotest_common.sh@10 -- # set +x 00:21:39.966 [2024-04-24 19:52:21.363841] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:21:39.967 [2024-04-24 19:52:21.363920] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.967 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.967 [2024-04-24 19:52:21.429535] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.224 [2024-04-24 19:52:21.543272] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.224 [2024-04-24 19:52:21.543327] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.224 [2024-04-24 19:52:21.543340] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.224 [2024-04-24 19:52:21.543356] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:40.224 [2024-04-24 19:52:21.543366] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.224 [2024-04-24 19:52:21.543452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:40.224 [2024-04-24 19:52:21.543512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:40.224 [2024-04-24 19:52:21.543579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:21:40.224 [2024-04-24 19:52:21.543582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:40.224 19:52:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:40.224 19:52:21 -- common/autotest_common.sh@850 -- # return 0 00:21:40.224 19:52:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:40.224 19:52:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:40.224 19:52:21 -- common/autotest_common.sh@10 -- # set +x 00:21:40.224 19:52:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.224 19:52:21 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:40.224 19:52:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.224 19:52:21 -- common/autotest_common.sh@10 -- # set +x 00:21:40.224 Malloc0 00:21:40.224 19:52:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.224 19:52:21 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:40.224 19:52:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.224 19:52:21 -- common/autotest_common.sh@10 -- # set +x 00:21:40.224 [2024-04-24 19:52:21.722367] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.224 19:52:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.224 19:52:21 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:40.224 19:52:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.224 19:52:21 -- common/autotest_common.sh@10 -- # set +x 00:21:40.481 19:52:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.481 19:52:21 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:40.481 19:52:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.481 19:52:21 -- common/autotest_common.sh@10 -- # set +x 00:21:40.481 19:52:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.481 19:52:21 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.481 19:52:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.481 19:52:21 -- common/autotest_common.sh@10 -- # set +x 00:21:40.481 [2024-04-24 19:52:21.750669] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.481 19:52:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.481 19:52:21 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:40.481 19:52:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.481 19:52:21 -- common/autotest_common.sh@10 -- # set +x 00:21:40.481 19:52:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:40.481 19:52:21 -- host/target_disconnect.sh@50 -- # reconnectpid=1779593 00:21:40.481 19:52:21 -- host/target_disconnect.sh@52 -- # sleep 2 00:21:40.481 19:52:21 -- host/target_disconnect.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:40.481 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.389 19:52:23 -- host/target_disconnect.sh@53 -- # kill -9 1779542 00:21:42.389 19:52:23 -- host/target_disconnect.sh@55 -- # sleep 2 00:21:42.389 Read completed with error (sct=0, sc=8) 00:21:42.389 starting I/O failed 00:21:42.389 Read completed with error (sct=0, sc=8) 00:21:42.389 starting I/O failed 00:21:42.389 Read completed with error (sct=0, sc=8) 00:21:42.389 starting I/O failed 00:21:42.389 Read completed with error (sct=0, sc=8) 00:21:42.389 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 
[2024-04-24 19:52:23.774652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 [2024-04-24 19:52:23.774985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 
00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 [2024-04-24 19:52:23.775332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Write completed with error (sct=0, sc=8) 00:21:42.390 starting I/O failed 00:21:42.390 Read completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Write completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 
Write completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Read completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Read completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Read completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Read completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Read completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Read completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Write completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Read completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Read completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Write completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Read completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Read completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Write completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 Write completed with error (sct=0, sc=8) 00:21:42.391 starting I/O failed 00:21:42.391 [2024-04-24 19:52:23.775606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:42.391 [2024-04-24 19:52:23.775850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.776081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.776111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.776337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.776525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.776552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.776753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.776915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.776942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.777170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.777426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.777455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 
00:21:42.391 [2024-04-24 19:52:23.777687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.777845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.777871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.778080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.778282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.778307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.778513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.778706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.778735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.778919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.779126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.779156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.779454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.779699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.779726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.779893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.780059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.780099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.780383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.780608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.780654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 
00:21:42.391 [2024-04-24 19:52:23.780849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.781041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.781068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.781265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.781618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.781703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.781862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.782057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.782083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.782284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.782688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.782715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.782875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.783144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.783174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.783341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.783642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.783688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.783874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.784216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.784242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 
00:21:42.391 [2024-04-24 19:52:23.784553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.784775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.784803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.784987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.785249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.785289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.785668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.785855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.785881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.786108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.786331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.786360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.786541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.786756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.786783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.391 qpair failed and we were unable to recover it. 00:21:42.391 [2024-04-24 19:52:23.786957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.787143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.391 [2024-04-24 19:52:23.787168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 00:21:42.392 [2024-04-24 19:52:23.787341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.787582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.787610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 
00:21:42.392 [2024-04-24 19:52:23.787831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.788064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.788090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 00:21:42.392 [2024-04-24 19:52:23.788310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.788549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.788578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 00:21:42.392 [2024-04-24 19:52:23.788796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.788967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.789000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 00:21:42.392 [2024-04-24 19:52:23.789257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.789474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.789504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 00:21:42.392 [2024-04-24 19:52:23.789732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.789919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.789965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 00:21:42.392 [2024-04-24 19:52:23.790185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.790421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.790447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 00:21:42.392 [2024-04-24 19:52:23.790683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.790872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.790898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 
00:21:42.392 [2024-04-24 19:52:23.791196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.791603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.791703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 00:21:42.392 [2024-04-24 19:52:23.791864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.792095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.792120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 00:21:42.392 [2024-04-24 19:52:23.792442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.792678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.792705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 00:21:42.392 [2024-04-24 19:52:23.792869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.793133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.793161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 00:21:42.392 [2024-04-24 19:52:23.793416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.793600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.793639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 00:21:42.392 [2024-04-24 19:52:23.793830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.794034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.794082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 00:21:42.392 [2024-04-24 19:52:23.794421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.794743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.392 [2024-04-24 19:52:23.794770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.392 qpair failed and we were unable to recover it. 
00:21:42.392 [2024-04-24 19:52:23.794925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.795145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.795171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.392 qpair failed and we were unable to recover it.
00:21:42.392 [2024-04-24 19:52:23.795355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.795517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.795544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.392 qpair failed and we were unable to recover it.
00:21:42.392 [2024-04-24 19:52:23.795707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.795925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.795952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.392 qpair failed and we were unable to recover it.
00:21:42.392 [2024-04-24 19:52:23.796153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.796338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.796382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.392 qpair failed and we were unable to recover it.
00:21:42.392 [2024-04-24 19:52:23.796606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.796837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.796864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.392 qpair failed and we were unable to recover it.
00:21:42.392 [2024-04-24 19:52:23.797074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.797286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.797328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.392 qpair failed and we were unable to recover it.
00:21:42.392 [2024-04-24 19:52:23.797581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.797783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.797811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.392 qpair failed and we were unable to recover it.
00:21:42.392 [2024-04-24 19:52:23.798041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.798257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.798306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.392 qpair failed and we were unable to recover it.
00:21:42.392 [2024-04-24 19:52:23.798477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.798716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.798743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.392 qpair failed and we were unable to recover it.
00:21:42.392 [2024-04-24 19:52:23.798936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.799139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.799166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.392 qpair failed and we were unable to recover it.
00:21:42.392 [2024-04-24 19:52:23.799353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.799602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.799646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.392 qpair failed and we were unable to recover it.
00:21:42.392 [2024-04-24 19:52:23.799844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.800017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.800044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.392 qpair failed and we were unable to recover it.
00:21:42.392 [2024-04-24 19:52:23.800213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.800463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.800512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.392 qpair failed and we were unable to recover it.
00:21:42.392 [2024-04-24 19:52:23.800754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.392 [2024-04-24 19:52:23.800923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.800952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.801151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.801332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.801358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.801566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.801832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.801859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.802042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.802277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.802303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.802512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.802728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.802756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.803018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.803211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.803237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.803407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.803638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.803682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.803851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.804072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.804101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.804394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.804640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.804669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.804874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.805053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.805078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.805309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.805580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.805607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.805763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.805947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.805988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.806191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.806397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.806422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.806649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.806872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.806898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.807117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.807310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.807351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.807558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.807822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.807852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.808053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.808280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.808310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.808541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.808746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.808773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.808955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.809127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.809154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.809359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.809589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.809614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.809795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.809959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.809999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.810204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.810441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.810469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.810673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.810851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.810893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.811101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.811254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.811280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.811494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.811652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.811679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.811875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.812045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.812071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.812266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.812463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.812494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.812693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.812862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.812893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.813108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.813305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.393 [2024-04-24 19:52:23.813330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.393 qpair failed and we were unable to recover it.
00:21:42.393 [2024-04-24 19:52:23.813553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.813793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.813828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.814076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.814254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.814279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.814520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.814696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.814723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.815008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.815267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.815314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.815604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.815844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.815873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.816078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.816311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.816337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.816540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.816764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.816795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.816972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.817174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.817203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.817403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.817574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.817603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.817845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.818064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.818090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.818303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.818556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.818603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.818813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.819036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.819062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.819270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.819533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.819562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.819775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.819942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.819968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.820132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.820328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.820354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.820559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.820751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.820778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.820988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.821198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.821223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.821437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.821641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.821671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.821899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.822253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.822283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.822544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.822800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.822832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.823070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.823371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.823429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.823656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.823841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.823868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.394 qpair failed and we were unable to recover it.
00:21:42.394 [2024-04-24 19:52:23.824128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.824428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.394 [2024-04-24 19:52:23.824453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.824678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.824884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.824921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.825097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.825275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.825299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.825515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.825775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.825802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.826021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.826276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.826301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.826487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.826668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.826706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.826892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.827128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.827153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.827331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.827592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.827617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.827818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.827981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.828007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.828212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.828393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.828436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.828647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.828853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.828884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.829085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.829319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.829347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.829572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.829800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.829829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.830064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.830279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.830305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.830509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.830727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.830754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.830932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.831124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.831150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.831336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.831546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.831571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.831806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.831986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.832011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.832222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.832396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.832423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.832664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.832862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.832890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.833084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.833291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.833317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.833522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.833741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.833769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.834051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.834296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.834321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.834507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.834807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.834837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.835048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.835291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.835332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.835561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.835789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.835815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.836055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.836260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.836286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.836521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.836718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.836746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.395 qpair failed and we were unable to recover it.
00:21:42.395 [2024-04-24 19:52:23.837018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.837244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.395 [2024-04-24 19:52:23.837272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.837485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.837673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.837700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.837888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.838168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.838194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.838374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.838572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.838597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.838814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.839024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.839049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.839263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.839468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.839512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.839722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.839905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.839956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.840160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.840372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.840400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.840621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.840838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.840863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.841157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.841427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.841456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.841742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.841986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.842013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.842236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.842476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.842505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.842758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.842979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.843005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.843217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.843414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.843440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.843655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.843944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.843970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.844195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.844495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.844558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.844765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.844983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.845009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.845218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.845472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.845518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.845721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.845959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.845999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.846220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.846371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.846397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.846603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.846817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.846843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.847101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.847306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.847333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.847514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.847712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.847739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.847922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.848129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.848176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.848389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.848605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.848651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.848840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.849069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.849093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.849317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.849522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.849552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.849760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.849978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.850004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.850167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.850396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.850421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.396 qpair failed and we were unable to recover it.
00:21:42.396 [2024-04-24 19:52:23.850672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.850881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.396 [2024-04-24 19:52:23.850921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.397 qpair failed and we were unable to recover it.
00:21:42.397 [2024-04-24 19:52:23.851139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.397 [2024-04-24 19:52:23.851373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.397 [2024-04-24 19:52:23.851398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.397 qpair failed and we were unable to recover it.
00:21:42.397 [2024-04-24 19:52:23.851578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.397 [2024-04-24 19:52:23.851792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.397 [2024-04-24 19:52:23.851822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.397 qpair failed and we were unable to recover it.
00:21:42.397 [2024-04-24 19:52:23.852032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.397 [2024-04-24 19:52:23.852242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.397 [2024-04-24 19:52:23.852267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.397 qpair failed and we were unable to recover it.
00:21:42.397 [2024-04-24 19:52:23.852474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.397 [2024-04-24 19:52:23.852721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.397 [2024-04-24 19:52:23.852748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.397 qpair failed and we were unable to recover it.
00:21:42.397 [2024-04-24 19:52:23.852981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.853138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.853180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.853377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.853686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.853713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.854035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.854378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.854433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.854667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.854856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.854881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.855082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.855264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.855291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.855510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.855733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.855759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.855967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.856230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.856261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 
00:21:42.397 [2024-04-24 19:52:23.856480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.856682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.856713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.856916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.857165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.857190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.857400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.857573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.857603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.857803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.858007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.858035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.858247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.858470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.858498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.858684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.858861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.858888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.859083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.859247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.859271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 
00:21:42.397 [2024-04-24 19:52:23.859448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.859618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.859668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.859849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.860074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.860122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.860324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.860556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.860583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.860884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.861078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.861104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.861277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.861491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.861518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.861684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.861869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.861894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.862206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.862508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.862568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 
00:21:42.397 [2024-04-24 19:52:23.862827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.863096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.863142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.863319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.863546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.863575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.863753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.863960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.863986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.397 qpair failed and we were unable to recover it. 00:21:42.397 [2024-04-24 19:52:23.864177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.397 [2024-04-24 19:52:23.864415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.864444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.864647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.864882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.864908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.865173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.865466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.865532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.865835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.866199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.866250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 
00:21:42.398 [2024-04-24 19:52:23.866468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.866704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.866734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.866911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.867122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.867147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.867306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.867504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.867532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.867735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.867941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.867966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.868140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.868343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.868368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.868560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.868841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.868868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.869127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.869342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.869367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 
00:21:42.398 [2024-04-24 19:52:23.869577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.869805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.869834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.870025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.870201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.870226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.870426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.870598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.870645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.870821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.871036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.871062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.871297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.871546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.871593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.871815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.872056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.872085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.872296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.872545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.872587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 
00:21:42.398 [2024-04-24 19:52:23.872809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.872987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.873013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.873243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.873441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.873488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.873694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.873903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.873951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.874137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.874360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.874386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.874648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.874833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.874859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.875138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.875382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.875424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.875655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.875880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.875928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 
00:21:42.398 [2024-04-24 19:52:23.876176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.876366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.876392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.876653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.876866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.876892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.877104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.877301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.877326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.877535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.877788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.877816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.398 qpair failed and we were unable to recover it. 00:21:42.398 [2024-04-24 19:52:23.877985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.398 [2024-04-24 19:52:23.878214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.878243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.878458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.878679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.878710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.878916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.879120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.879147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 
00:21:42.399 [2024-04-24 19:52:23.879356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.879517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.879546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.879780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.880028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.880054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.880241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.880464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.880489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.880713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.880895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.880934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.881133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.881295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.881321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.881517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.881761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.881808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.882003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.882176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.882205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 
00:21:42.399 [2024-04-24 19:52:23.882412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.882609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.882645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.882860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.883205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.883235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.883474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.883626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.883675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.883858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.884052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.884077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.884247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.884542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.884572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.884791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.885021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.885046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.885191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.885403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.885443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 
00:21:42.399 [2024-04-24 19:52:23.885661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.885845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.885871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.886109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.886314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.886361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.886558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.886779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.886806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.887063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.887284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.887319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.887530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.887763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.887790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.887992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.888134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.888159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.399 qpair failed and we were unable to recover it. 00:21:42.399 [2024-04-24 19:52:23.888312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.399 [2024-04-24 19:52:23.888473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.888514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 
00:21:42.400 [2024-04-24 19:52:23.888748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.888960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.888990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.889155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.889383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.889408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.889586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.889824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.889852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.890127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.890332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.890359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.890570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.890759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.890786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.890965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.891199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.891225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.891443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.891676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.891706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 
00:21:42.400 [2024-04-24 19:52:23.891887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.892123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.892149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.892421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.892589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.892649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.892888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.893114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.893140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.893317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.893503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.893533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.893723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.893946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.893982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.894219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.894439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.894465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.894665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.894850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.894876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 
00:21:42.400 [2024-04-24 19:52:23.895089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.895404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.895440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.895636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.895825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.895860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.896102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.896313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.896342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.896544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.896732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.896764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.896989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.897239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.897265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.400 [2024-04-24 19:52:23.897473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.897636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.400 [2024-04-24 19:52:23.897670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.400 qpair failed and we were unable to recover it. 00:21:42.671 [2024-04-24 19:52:23.897878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.671 [2024-04-24 19:52:23.898136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.671 [2024-04-24 19:52:23.898191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.671 qpair failed and we were unable to recover it. 
[... duplicates for tqpair=0x7f4438000b90 continue through 19:52:23.901; the last retry in this run switches to a new qpair ...]
00:21:42.671 [2024-04-24 19:52:23.901108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.671 [2024-04-24 19:52:23.901451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.671 [2024-04-24 19:52:23.901512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.671 qpair failed and we were unable to recover it.
00:21:42.671 [2024-04-24 19:52:23.901763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.671 [2024-04-24 19:52:23.901978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.671 [2024-04-24 19:52:23.902004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.671 qpair failed and we were unable to recover it.
[... the same pattern repeats for tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 from 19:52:23.901 through 19:52:23.916 — duplicate entries omitted ...]
00:21:42.672 [2024-04-24 19:52:23.916972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.917150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.917181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.672 qpair failed and we were unable to recover it. 00:21:42.672 [2024-04-24 19:52:23.917376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.917558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.917584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.672 qpair failed and we were unable to recover it. 00:21:42.672 [2024-04-24 19:52:23.917796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.917979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.918006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.672 qpair failed and we were unable to recover it. 00:21:42.672 [2024-04-24 19:52:23.918192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.918370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.918412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.672 qpair failed and we were unable to recover it. 00:21:42.672 [2024-04-24 19:52:23.918643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.918809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.918839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.672 qpair failed and we were unable to recover it. 00:21:42.672 [2024-04-24 19:52:23.919048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.919385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.919436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.672 qpair failed and we were unable to recover it. 00:21:42.672 [2024-04-24 19:52:23.919717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.919959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.919985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.672 qpair failed and we were unable to recover it. 
00:21:42.672 [2024-04-24 19:52:23.920139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.920409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.920465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.672 qpair failed and we were unable to recover it. 00:21:42.672 [2024-04-24 19:52:23.920677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.920861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.920888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.672 qpair failed and we were unable to recover it. 00:21:42.672 [2024-04-24 19:52:23.921105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.921426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.672 [2024-04-24 19:52:23.921485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.672 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.921738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.921928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.921959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.922149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.922384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.922409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.922634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.922930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.922955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.923164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.923663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.923709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 
00:21:42.673 [2024-04-24 19:52:23.923916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.924156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.924197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.924401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.924650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.924677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.924890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.925169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.925194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.925446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.925665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.925692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.925916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.926176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.926218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.926466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.926662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.926687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.926874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.927083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.927111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 
00:21:42.673 [2024-04-24 19:52:23.927342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.927552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.927578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.927801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.928013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.928041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.928263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.928441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.928468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.928685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.928867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.928893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.929117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.929337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.929363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.929541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.929758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.929784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.929969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.930155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.930180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 
00:21:42.673 [2024-04-24 19:52:23.930396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.930599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.930645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.930860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.931080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.931104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.931326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.931526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.931554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.931846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.932188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.932242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.932450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.932667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.932693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.932879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.933167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.933216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.933397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.933592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.933634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 
00:21:42.673 [2024-04-24 19:52:23.933894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.934107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.934132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.934322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.934545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.934570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.934794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.934966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.934992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.673 qpair failed and we were unable to recover it. 00:21:42.673 [2024-04-24 19:52:23.935208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.935505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.673 [2024-04-24 19:52:23.935561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.935777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.936009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.936037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.936304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.936500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.936524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.936729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.936929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.936969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 
00:21:42.674 [2024-04-24 19:52:23.937182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.937406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.937430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.937622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.937791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.937831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.938068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.938456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.938508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.938735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.938920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.938960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.939189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.939373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.939399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.939583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.939875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.939917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.940200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.940570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.940636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 
00:21:42.674 [2024-04-24 19:52:23.940895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.941233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.941288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.941520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.941732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.941759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.941945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.942146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.942171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.942385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.942557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.942587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.942790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.943003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.943028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.943242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.943476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.943501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.943748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.943923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.943954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 
00:21:42.674 [2024-04-24 19:52:23.944152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.944397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.944423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.944608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.944823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.944848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.945064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.945266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.945291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.945552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.945836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.945866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.946041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.946212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.946242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.946483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.946720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.946749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.946967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.947181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.947206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 
00:21:42.674 [2024-04-24 19:52:23.947432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.947636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.947662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.947879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.948039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.948064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.948249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.948462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.948490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.948695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.948847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.948872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.674 qpair failed and we were unable to recover it. 00:21:42.674 [2024-04-24 19:52:23.949128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.674 [2024-04-24 19:52:23.949501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.949526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.949740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.949972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.949997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.950183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.950418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.950443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 
00:21:42.675 [2024-04-24 19:52:23.950662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.950846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.950872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.951214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.951565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.951616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.951828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.952115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.952191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.952414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.952618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.952663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.952919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.953154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.953184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.953419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.953607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.953639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.953932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.954142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.954167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 
00:21:42.675 [2024-04-24 19:52:23.954381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.954591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.954616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.954806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.955039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.955064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.955291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.955667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.955727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.955933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.956150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.956176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.956381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.956615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.956655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.956880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.957165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.957190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.957436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.957638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.957667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 
00:21:42.675 [2024-04-24 19:52:23.957838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.958001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.958030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.958308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.958518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.958546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.958791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.959054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.959106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.959332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.959549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.959575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.959805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.960028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.960057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.960230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.960467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.960528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 00:21:42.675 [2024-04-24 19:52:23.960751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.960953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.675 [2024-04-24 19:52:23.960983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.675 qpair failed and we were unable to recover it. 
00:21:42.676 [2024-04-24 19:52:23.961185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.676 [2024-04-24 19:52:23.961381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.676 [2024-04-24 19:52:23.961406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.676 qpair failed and we were unable to recover it. 00:21:42.676 [2024-04-24 19:52:23.961606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.676 [2024-04-24 19:52:23.961843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.676 [2024-04-24 19:52:23.961872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.676 qpair failed and we were unable to recover it. 00:21:42.676 [2024-04-24 19:52:23.962096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.676 [2024-04-24 19:52:23.962298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.676 [2024-04-24 19:52:23.962324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.676 qpair failed and we were unable to recover it. 00:21:42.676 [2024-04-24 19:52:23.962537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.676 [2024-04-24 19:52:23.962739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.676 [2024-04-24 19:52:23.962768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.676 qpair failed and we were unable to recover it. 00:21:42.676 [2024-04-24 19:52:23.962972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.676 [2024-04-24 19:52:23.963205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.676 [2024-04-24 19:52:23.963233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.676 qpair failed and we were unable to recover it. 00:21:42.676 [2024-04-24 19:52:23.963444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.676 [2024-04-24 19:52:23.963669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.676 [2024-04-24 19:52:23.963708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.676 qpair failed and we were unable to recover it. 00:21:42.676 [2024-04-24 19:52:23.963915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.676 [2024-04-24 19:52:23.964120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.676 [2024-04-24 19:52:23.964161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.676 qpair failed and we were unable to recover it. 
00:21:42.676 [2024-04-24 19:52:23.964334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.676 [2024-04-24 19:52:23.964546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.676 [2024-04-24 19:52:23.964576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420
00:21:42.676 qpair failed and we were unable to recover it.
[... the same failure sequence (two posix_sock_create connect() errors with errno = 111, then an nvme_tcp_qpair_connect_sock error for tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats without variation from 19:52:23.964 through 19:52:24.037; the intervening identical attempts are omitted here ...]
00:21:42.681 [2024-04-24 19:52:24.037350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.681 [2024-04-24 19:52:24.037590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.681 [2024-04-24 19:52:24.037616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420
00:21:42.681 qpair failed and we were unable to recover it.
00:21:42.681 [2024-04-24 19:52:24.037785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.681 [2024-04-24 19:52:24.037985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.681 [2024-04-24 19:52:24.038013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.681 qpair failed and we were unable to recover it. 00:21:42.681 [2024-04-24 19:52:24.038246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.681 [2024-04-24 19:52:24.038414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.681 [2024-04-24 19:52:24.038442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.681 qpair failed and we were unable to recover it. 00:21:42.681 [2024-04-24 19:52:24.038645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.681 [2024-04-24 19:52:24.038820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.681 [2024-04-24 19:52:24.038845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.681 qpair failed and we were unable to recover it. 00:21:42.681 [2024-04-24 19:52:24.039046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.681 [2024-04-24 19:52:24.039311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.681 [2024-04-24 19:52:24.039356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.681 qpair failed and we were unable to recover it. 00:21:42.681 [2024-04-24 19:52:24.039569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.681 [2024-04-24 19:52:24.039728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.681 [2024-04-24 19:52:24.039755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.681 qpair failed and we were unable to recover it. 00:21:42.681 [2024-04-24 19:52:24.039948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.681 [2024-04-24 19:52:24.040184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.681 [2024-04-24 19:52:24.040209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.681 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.040389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.040576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.040601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 
00:21:42.682 [2024-04-24 19:52:24.040830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.041042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.041094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.041323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.041526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.041551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.041721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.041896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.041936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.042226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.042481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.042507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.042667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.042830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.042856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.043070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.043323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.043353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.043555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.043769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.043796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 
00:21:42.682 [2024-04-24 19:52:24.043954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.044175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.044220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.044455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.044643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.044669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.044846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.045102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.045147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.045345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.045557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.045592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.045777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.045954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.045983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.046175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.046403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.046448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.046639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.046802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.046828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 
00:21:42.682 [2024-04-24 19:52:24.047041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.047292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.047336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.047518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.047739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.047786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.047952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.048162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.048189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.048348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.048534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.048561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.048795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.048990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.049033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.049261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.049449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.049476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.049711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.049912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.049964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 
00:21:42.682 [2024-04-24 19:52:24.050199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.050440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.050466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.682 qpair failed and we were unable to recover it. 00:21:42.682 [2024-04-24 19:52:24.050654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.682 [2024-04-24 19:52:24.050830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.050878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.051116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.051331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.051358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.051555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.051741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.051768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.051932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.052114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.052142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.052345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.052526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.052553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.052750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.052959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.053002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 
00:21:42.683 [2024-04-24 19:52:24.053213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.053368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.053394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.053603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.053820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.053848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.054061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.054272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.054298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.054494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.054656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.054686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.054872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.055125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.055169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.055358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.055566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.055592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.055800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.056021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.056048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 
00:21:42.683 [2024-04-24 19:52:24.056287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.056498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.056524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.056736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.056948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.056999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.057231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.057405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.057432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.057612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.057848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.057892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.058126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.058352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.058379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.058560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.058760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.058806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.058973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.059199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.059243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 
00:21:42.683 [2024-04-24 19:52:24.059461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.059685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.059715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.059939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.060191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.060235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.060451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.060665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.060695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.060885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.061092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.061134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.061318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.061516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.061543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.061736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.061894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.061921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 00:21:42.683 [2024-04-24 19:52:24.062132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.062324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.683 [2024-04-24 19:52:24.062350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.683 qpair failed and we were unable to recover it. 
00:21:42.683 [2024-04-24 19:52:24.062542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.062787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.062833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.063009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.063238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.063267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.063485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.063645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.063672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.063859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.064094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.064136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.064328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.064540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.064567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.064767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.064975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.065018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.065259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.065411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.065437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 
00:21:42.684 [2024-04-24 19:52:24.065599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.065830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.065875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.066076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.066336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.066363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.066547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.066740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.066786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.066983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.067221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.067269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.067488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.067689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.067718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.068755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.068964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.068993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.069234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.069475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.069509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 
00:21:42.684 [2024-04-24 19:52:24.069690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.069894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.069943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.070189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.070400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.070427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.070643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.071415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.071446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.071682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.072521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.072558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.072760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.073473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.073504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.073724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.073916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.073966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.074214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.074460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.074487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 
00:21:42.684 [2024-04-24 19:52:24.074660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.074876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.074907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.075153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.075382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.075409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.075574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.075773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.075819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.076039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.076259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.076308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.076517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.076730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.076777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.076965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.077204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.077249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 00:21:42.684 [2024-04-24 19:52:24.077438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.077655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.684 [2024-04-24 19:52:24.077683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.684 qpair failed and we were unable to recover it. 
00:21:42.685 [2024-04-24 19:52:24.077873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.078120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.078163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.078394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.078539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.078566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.078765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.078977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.079022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.079231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.079444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.079471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.079633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.079821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.079871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.080089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.080267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.080294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.080453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.080680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.080712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 
00:21:42.685 [2024-04-24 19:52:24.080972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.081212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.081239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.081438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.081668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.081700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.081916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.082176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.082219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.082370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.082556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.082583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.082777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.083656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.083697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.083915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.084754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.084784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.084993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.085220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.085248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 
00:21:42.685 [2024-04-24 19:52:24.085442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.085636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.085663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.085853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.086081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.086129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.086375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.086529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.086558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.685 [2024-04-24 19:52:24.086751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.086960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.685 [2024-04-24 19:52:24.087005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.685 qpair failed and we were unable to recover it. 00:21:42.686 [2024-04-24 19:52:24.087221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.087432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.087460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.686 qpair failed and we were unable to recover it. 00:21:42.686 [2024-04-24 19:52:24.087678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.087870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.087897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.686 qpair failed and we were unable to recover it. 00:21:42.686 [2024-04-24 19:52:24.088092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.088276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.088304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.686 qpair failed and we were unable to recover it. 
00:21:42.686 [2024-04-24 19:52:24.088503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.088668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.088696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.686 qpair failed and we were unable to recover it. 00:21:42.686 [2024-04-24 19:52:24.088901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.089125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.089169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.686 qpair failed and we were unable to recover it. 00:21:42.686 [2024-04-24 19:52:24.089376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.089584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.089611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.686 qpair failed and we were unable to recover it. 00:21:42.686 [2024-04-24 19:52:24.089803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.090039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.090083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.686 qpair failed and we were unable to recover it. 00:21:42.686 [2024-04-24 19:52:24.090309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.090491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.090517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.686 qpair failed and we were unable to recover it. 00:21:42.686 [2024-04-24 19:52:24.090725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.090914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.090940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.686 qpair failed and we were unable to recover it. 00:21:42.686 [2024-04-24 19:52:24.091125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.091306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.686 [2024-04-24 19:52:24.091333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.686 qpair failed and we were unable to recover it. 
00:21:42.686 [2024-04-24 19:52:24.091524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.092214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.092250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.092461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.093308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.093338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.093536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.093709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.093738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.093941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.094166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.094191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.094386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.094662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.094706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.094902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.095143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.095189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.095364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.095638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.095666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.095914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.096115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.096162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.096374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.096617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.096656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.096869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.097080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.097127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.097348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.097540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.097566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.097756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.097941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.097967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.098155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.098344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.098370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.098578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.098781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.098827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.099045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.099276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.099303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.099487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.099709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.099757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.099946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.100133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.100159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.100318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.100475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.100501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.686 qpair failed and we were unable to recover it.
00:21:42.686 [2024-04-24 19:52:24.100681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.686 [2024-04-24 19:52:24.100870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.100897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.101082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.101322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.101349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.101640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.101826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.101870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.102081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.102330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.102375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.102592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.102796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.102840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.103023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.103214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.103257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.103462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.103676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.103705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.103888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.104112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.104155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.104343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.104572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.104598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.104802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.105033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.105076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.105338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.105551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.105577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.105772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.105959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.106002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.106179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.106403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.106448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.106750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.106932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.106976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.107193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.107444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.107488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.107677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.107833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.107860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.108054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.108330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.108372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.108564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.108751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.108778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.108967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.109158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.109206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.109431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.109611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.109647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.109836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.110094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.110137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.687 qpair failed and we were unable to recover it.
00:21:42.687 [2024-04-24 19:52:24.110304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.110491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.687 [2024-04-24 19:52:24.110517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.110717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.110927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.110971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.111200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.111405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.111449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.111660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.111827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.111853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.112059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.112317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.112363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.112557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.112727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.112753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.112937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.113168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.113219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.113425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.113632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.113664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.113827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.114038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.114080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.114299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.114467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.114492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.114709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.114914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.114957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.115191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.115415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.115459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.115641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.115831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.115875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.116101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.116348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.116391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.116601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.116786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.116811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.116989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.117212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.117267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.117516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.117746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.117773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.117954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.118208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.118256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.118485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.118742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.118786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.118977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.119169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.119211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.119410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.119639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.119665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.119828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.119987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.120014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.120254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.120472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.120517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.120701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.120958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.121004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.121240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.121466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.121509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.121706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.121929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.121977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.122189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.122440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.122468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.122692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.122895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.122942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.123120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.123346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.123388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.688 qpair failed and we were unable to recover it.
00:21:42.688 [2024-04-24 19:52:24.123556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.123777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.688 [2024-04-24 19:52:24.123821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.124077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.124334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.124377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.124563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.124786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.124812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.125005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.125252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.125295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.125476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.125639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.125665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.125858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.126094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.126122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.126373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.126537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.126572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.126793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.127000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.127043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.127234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.127452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.127494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.127720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.127915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.127942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.128154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.128408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.128451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.128639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.128846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.128871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.129114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.129288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.129314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.129470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.129678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.129705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.129917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.130160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.130187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.130392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.130589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.130615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.130784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.130987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.131031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.131277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.131478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.131504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.131668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.131875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.131917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.132161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.132426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.132467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.132669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.132894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.132936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.133200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.133370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.133396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.133580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.133794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.133837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.134039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.134266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.134312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.134496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.134697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.134742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.134950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.135166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.135209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.689 qpair failed and we were unable to recover it.
00:21:42.689 [2024-04-24 19:52:24.135394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.135592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.689 [2024-04-24 19:52:24.135618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.135867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.136108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.136136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.136327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.136553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.136578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.136801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.136998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.137041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.137277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.137524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.137570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.137788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.138022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.138050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.138251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.138471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.138513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.138739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.138958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.139001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.139229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.139427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.139454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.139605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.139826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.139872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.140117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.140333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.140375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.140568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.140730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.140756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.140948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.141181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.141209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.141465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.141745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.141774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.141980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.142205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.142249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.142570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.142782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.142808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.142990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.143207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.143249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.143441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.143639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.143666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.690 [2024-04-24 19:52:24.143877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.144086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.690 [2024-04-24 19:52:24.144127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.690 qpair failed and we were unable to recover it.
00:21:42.691 [2024-04-24 19:52:24.144346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.144559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.144585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.691 qpair failed and we were unable to recover it.
00:21:42.691 [2024-04-24 19:52:24.144754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.144957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.145000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.691 qpair failed and we were unable to recover it.
00:21:42.691 [2024-04-24 19:52:24.145168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.145390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.145433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.691 qpair failed and we were unable to recover it.
00:21:42.691 [2024-04-24 19:52:24.145615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.145775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.145802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.691 qpair failed and we were unable to recover it.
00:21:42.691 [2024-04-24 19:52:24.146033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.146265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.146307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.691 qpair failed and we were unable to recover it.
00:21:42.691 [2024-04-24 19:52:24.146517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.146727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.146753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.691 qpair failed and we were unable to recover it.
00:21:42.691 [2024-04-24 19:52:24.146960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.147222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.147264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.691 qpair failed and we were unable to recover it.
00:21:42.691 [2024-04-24 19:52:24.147446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.147650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.147676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.691 qpair failed and we were unable to recover it.
00:21:42.691 [2024-04-24 19:52:24.147903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.148158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.148201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.691 qpair failed and we were unable to recover it.
00:21:42.691 [2024-04-24 19:52:24.148417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.148590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.148615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.691 qpair failed and we were unable to recover it.
00:21:42.691 [2024-04-24 19:52:24.148778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.149018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.149061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.691 qpair failed and we were unable to recover it.
00:21:42.691 [2024-04-24 19:52:24.149271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.149496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.149522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.691 qpair failed and we were unable to recover it.
00:21:42.691 [2024-04-24 19:52:24.149688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.149925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.691 [2024-04-24 19:52:24.149967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.691 qpair failed and we were unable to recover it.
00:21:42.691 [2024-04-24 19:52:24.150147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.150400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.150442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 00:21:42.691 [2024-04-24 19:52:24.150637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.150816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.150841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 00:21:42.691 [2024-04-24 19:52:24.151041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.151264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.151308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 00:21:42.691 [2024-04-24 19:52:24.151496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.151688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.151715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 00:21:42.691 [2024-04-24 19:52:24.151924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.152149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.152192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 00:21:42.691 [2024-04-24 19:52:24.152365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.152590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.152615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 00:21:42.691 [2024-04-24 19:52:24.152780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.152991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.153034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 
00:21:42.691 [2024-04-24 19:52:24.153254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.153481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.153523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 00:21:42.691 [2024-04-24 19:52:24.153713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.153900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.153943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 00:21:42.691 [2024-04-24 19:52:24.154187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.154408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.154436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 00:21:42.691 [2024-04-24 19:52:24.154606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.154762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.154788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 00:21:42.691 [2024-04-24 19:52:24.155029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.155251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.155294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 00:21:42.691 [2024-04-24 19:52:24.155474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.155722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.155765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 00:21:42.691 [2024-04-24 19:52:24.155973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.156193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.156241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 
00:21:42.691 [2024-04-24 19:52:24.156449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.156620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.156652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.691 qpair failed and we were unable to recover it. 00:21:42.691 [2024-04-24 19:52:24.156861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.691 [2024-04-24 19:52:24.157078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.157108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.157334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.157535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.157560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.157752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.157985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.158013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.158233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.158454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.158502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.158738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.158968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.159012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.159217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.159416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.159441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 
00:21:42.692 [2024-04-24 19:52:24.159633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.159842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.159886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.160094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.160290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.160332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.160520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.160731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.160775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.160999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.161236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.161284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.161497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.161691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.161720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.161934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.162186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.162229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.162445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.162626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.162657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 
00:21:42.692 [2024-04-24 19:52:24.162840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.163022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.163064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.163271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.163498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.163540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.163748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.163928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.163970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.164177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.164393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.164436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.164594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.164783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.164810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.165057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.165276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.165319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.165501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.165689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.165715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 
00:21:42.692 [2024-04-24 19:52:24.165949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.166157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.166200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.166408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.166589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.166615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.166825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.167082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.167125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.167335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.167532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.167557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.167770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.167977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.168019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.168228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.168422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.168465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.168653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.168838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.168882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 
00:21:42.692 [2024-04-24 19:52:24.169071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.169292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.169335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.692 qpair failed and we were unable to recover it. 00:21:42.692 [2024-04-24 19:52:24.169525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.692 [2024-04-24 19:52:24.169759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.169804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.693 qpair failed and we were unable to recover it. 00:21:42.693 [2024-04-24 19:52:24.170010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.170228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.170272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.693 qpair failed and we were unable to recover it. 00:21:42.693 [2024-04-24 19:52:24.170427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.170612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.170643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.693 qpair failed and we were unable to recover it. 00:21:42.693 [2024-04-24 19:52:24.170858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.171049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.171093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.693 qpair failed and we were unable to recover it. 00:21:42.693 [2024-04-24 19:52:24.171334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.171534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.171559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.693 qpair failed and we were unable to recover it. 00:21:42.693 [2024-04-24 19:52:24.171745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.171947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.171989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.693 qpair failed and we were unable to recover it. 
00:21:42.693 [2024-04-24 19:52:24.172196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.172392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.172417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.693 qpair failed and we were unable to recover it. 00:21:42.693 [2024-04-24 19:52:24.172580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.172801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.693 [2024-04-24 19:52:24.172845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.693 qpair failed and we were unable to recover it. 00:21:42.693 [2024-04-24 19:52:24.173076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.173282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.173326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.968 qpair failed and we were unable to recover it. 00:21:42.968 [2024-04-24 19:52:24.173510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.173751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.173780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.968 qpair failed and we were unable to recover it. 00:21:42.968 [2024-04-24 19:52:24.174031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.174264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.174307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.968 qpair failed and we were unable to recover it. 00:21:42.968 [2024-04-24 19:52:24.174494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.174696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.174740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.968 qpair failed and we were unable to recover it. 00:21:42.968 [2024-04-24 19:52:24.174961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.175170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.175198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.968 qpair failed and we were unable to recover it. 
00:21:42.968 [2024-04-24 19:52:24.175417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.175618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.175648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.968 qpair failed and we were unable to recover it. 00:21:42.968 [2024-04-24 19:52:24.175827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.176076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.176119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.968 qpair failed and we were unable to recover it. 00:21:42.968 [2024-04-24 19:52:24.176363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.176555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.176580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.968 qpair failed and we were unable to recover it. 00:21:42.968 [2024-04-24 19:52:24.176750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.176927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.176971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.968 qpair failed and we were unable to recover it. 00:21:42.968 [2024-04-24 19:52:24.177218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.177415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.177459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.968 qpair failed and we were unable to recover it. 00:21:42.968 [2024-04-24 19:52:24.177670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.177883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.177926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.968 qpair failed and we were unable to recover it. 00:21:42.968 [2024-04-24 19:52:24.178107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.178334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.178376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.968 qpair failed and we were unable to recover it. 
00:21:42.968 [2024-04-24 19:52:24.178527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.178758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.178801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.968 qpair failed and we were unable to recover it. 00:21:42.968 [2024-04-24 19:52:24.179035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.179341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.179383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.968 qpair failed and we were unable to recover it. 00:21:42.968 [2024-04-24 19:52:24.179543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.179751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.968 [2024-04-24 19:52:24.179794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.179976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.180234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.180277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.180489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.180733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.180778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.181011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.181305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.181357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.181566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.181728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.181755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 
00:21:42.969 [2024-04-24 19:52:24.181968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.182194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.182237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.182436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.182613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.182646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.182827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.183012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.183055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.183299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.183497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.183523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.183696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.183919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.183962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.184149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.184451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.184511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.184738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.184962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.185004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 
00:21:42.969 [2024-04-24 19:52:24.185182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.185408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.185451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.185610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.185834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.185878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.186088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.186328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.186377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.186585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.186790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.186833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.187019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.187264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.187310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.187514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.187715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.187741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.187977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.188183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.188229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 
00:21:42.969 [2024-04-24 19:52:24.188412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.188613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.188643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.188853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.189037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.189081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.189288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.189511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.189554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.189768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.190004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.190033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.190231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.190650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.190708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.190916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.191148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.191190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.191403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.191602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.191634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 
00:21:42.969 [2024-04-24 19:52:24.191786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.192020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.192068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.192285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.192612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.192687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.192897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.193138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.193182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.969 qpair failed and we were unable to recover it. 00:21:42.969 [2024-04-24 19:52:24.193386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.969 [2024-04-24 19:52:24.193564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.193589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.193812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.194023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.194051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.194307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.194503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.194528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.194733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.194942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.194988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 
00:21:42.970 [2024-04-24 19:52:24.195199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.195419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.195466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.195625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.195841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.195867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.196077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.196272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.196316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.196498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.196724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.196771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.196978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.197199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.197242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.197426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.197605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.197640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.197874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.198199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.198245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 
00:21:42.970 [2024-04-24 19:52:24.198474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.198737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.198763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.199002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.199243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.199291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.199453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.199661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.199687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.199897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.200113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.200156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.200375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.200599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.200624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.200837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.201066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.201109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 00:21:42.970 [2024-04-24 19:52:24.201347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.201545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.970 [2024-04-24 19:52:24.201570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.970 qpair failed and we were unable to recover it. 
00:21:42.970 [2024-04-24 19:52:24.201759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.970 [2024-04-24 19:52:24.201993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.970 [2024-04-24 19:52:24.202036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.970 qpair failed and we were unable to recover it.
[... the same two-line connect() failure plus qpair error repeats for every successive attempt from 19:52:24.202 through 19:52:24.273, always errno = 111 against tqpair=0x7f4440000b90, addr=10.0.0.2, port=4420; duplicate repetitions elided ...]
00:21:42.976 [2024-04-24 19:52:24.273347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.976 [2024-04-24 19:52:24.273547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.976 [2024-04-24 19:52:24.273572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:42.976 qpair failed and we were unable to recover it.
00:21:42.976 [2024-04-24 19:52:24.273755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.273957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.273999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.274183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.274570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.274625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.274844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.275041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.275082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.275260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.275459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.275501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.275722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.275985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.276028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.276207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.276397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.276439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.276615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.276834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.276880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 
00:21:42.976 [2024-04-24 19:52:24.277094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.277357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.277383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.277593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.277802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.277845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.278055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.278431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.278456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.278666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.278852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.278895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.279126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.279426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.279452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.279644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.279917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.279961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.280164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.280449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.280497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 
00:21:42.976 [2024-04-24 19:52:24.280697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.281035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.281094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.281318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.281519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.281544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.281726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.281981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.282024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.282248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.282456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.282481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.282694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.282946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.282989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.283184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.283381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.283406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.283592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.283807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.283857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 
00:21:42.976 [2024-04-24 19:52:24.284094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.284488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.284546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.284788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.285015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.285059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.285259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.285456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.285483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.285746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.285979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.286021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.976 qpair failed and we were unable to recover it. 00:21:42.976 [2024-04-24 19:52:24.286220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.976 [2024-04-24 19:52:24.286415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.286441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 00:21:42.977 [2024-04-24 19:52:24.286645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.286825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.286868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 00:21:42.977 [2024-04-24 19:52:24.287083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.287309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.287351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 
00:21:42.977 [2024-04-24 19:52:24.287527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.287733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.287781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 00:21:42.977 [2024-04-24 19:52:24.287968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.288193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.288236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 00:21:42.977 [2024-04-24 19:52:24.288414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.288598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.288623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 00:21:42.977 [2024-04-24 19:52:24.288848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.289044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.289089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 00:21:42.977 [2024-04-24 19:52:24.289330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.289504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.289529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 00:21:42.977 [2024-04-24 19:52:24.289703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.289942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.289986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 00:21:42.977 [2024-04-24 19:52:24.290204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.290405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.290432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 
00:21:42.977 [2024-04-24 19:52:24.290622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.290865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.290893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 00:21:42.977 [2024-04-24 19:52:24.291092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.291323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.291350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 00:21:42.977 [2024-04-24 19:52:24.291567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.291770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.291817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 00:21:42.977 [2024-04-24 19:52:24.292028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.292250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.292293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 00:21:42.977 [2024-04-24 19:52:24.292446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.292648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.292674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 00:21:42.977 [2024-04-24 19:52:24.292887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.293114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.293156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 00:21:42.977 [2024-04-24 19:52:24.293365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.293546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.977 [2024-04-24 19:52:24.293571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:42.977 qpair failed and we were unable to recover it. 
00:21:42.977 [2024-04-24 19:52:24.294594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16dd860 is same with the state(5) to be set
00:21:42.977 [2024-04-24 19:52:24.294891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.977 [2024-04-24 19:52:24.295085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.977 [2024-04-24 19:52:24.295119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:42.977 qpair failed and we were unable to recover it.
00:21:42.980 [2024-04-24 19:52:24.324931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.325135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.325160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.325310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.325497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.325538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.325723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.325890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.325916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.326147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.326371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.326399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.326625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.326820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.326846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.327070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.327243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.327271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.327484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.327662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.327706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 
00:21:42.980 [2024-04-24 19:52:24.327888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.328045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.328071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.328231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.328410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.328435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.328657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.328866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.328894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.329132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.329425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.329450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.329636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.329800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.329824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.330027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.330339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.330399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.330577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.330786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.330812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 
00:21:42.980 [2024-04-24 19:52:24.330974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.331266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.331323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.331539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.331770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.331796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.331978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.332139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.332164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.332355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.332519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.332543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.332789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.333003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.333030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.333255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.333416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.333442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.333623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.333805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.333832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 
00:21:42.980 [2024-04-24 19:52:24.334033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.334231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.334260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.334482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.334699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.334725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.980 [2024-04-24 19:52:24.334917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.335261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.980 [2024-04-24 19:52:24.335316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.980 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.335514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.335704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.335730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.335916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.336100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.336124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.336401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.336663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.336689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.336849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.337086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.337111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 
00:21:42.981 [2024-04-24 19:52:24.337318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.337502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.337527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.337744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.337983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.338034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.338219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.338376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.338401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.338555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.338796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.338824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.339028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.339200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.339228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.339420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.339584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.339613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.339817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.339972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.339997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 
00:21:42.981 [2024-04-24 19:52:24.340178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.340575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.340639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.340845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.341026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.341051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.341210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.341390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.341414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.341600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.341788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.341816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.341977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.342160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.342185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.342368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.342511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.342536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.342718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.342869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.342893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 
00:21:42.981 [2024-04-24 19:52:24.343091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.343388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.343412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.343592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.343756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.343781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.343967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.344251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.344310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.344531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.344701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.344730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.344927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.345111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.345136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.345312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.345472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.345497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.345664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.345883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.345911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 
00:21:42.981 [2024-04-24 19:52:24.346115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.346325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.346350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.346567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.346744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.346772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.346969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.347166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.347193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.981 qpair failed and we were unable to recover it. 00:21:42.981 [2024-04-24 19:52:24.347368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.347561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.981 [2024-04-24 19:52:24.347588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.347798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.348012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.348037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.348188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.348396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.348424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.348639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.348822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.348848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 
00:21:42.982 [2024-04-24 19:52:24.349039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.349252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.349277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.349489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.349767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.349795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.349990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.350147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.350172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.350331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.350541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.350566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.350751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.350935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.350977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.351186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.351367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.351391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.351578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.351770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.351797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 
00:21:42.982 [2024-04-24 19:52:24.351979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.352167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.352195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.352401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.352602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.352639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.352871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.353043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.353073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.353282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.353459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.353501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.353739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.353902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.353927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.354110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.354287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.354313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.354471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.354710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.354736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 
00:21:42.982 [2024-04-24 19:52:24.354914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.355071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.355096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.355257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.355467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.355495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.355672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.355833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.355858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.356039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.356324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.356382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.356623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.356810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.356835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.356989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.357204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.357237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 00:21:42.982 [2024-04-24 19:52:24.357428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.357612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.357645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.982 qpair failed and we were unable to recover it. 
00:21:42.982 [2024-04-24 19:52:24.357859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.982 [2024-04-24 19:52:24.358142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.358190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.358415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.358603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.358637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.358822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.359089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.359139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.359366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.359620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.359654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.359862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.360124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.360176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.360368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.360535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.360563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.360749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.360991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.361044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 
00:21:42.983 [2024-04-24 19:52:24.361229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.361436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.361460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.361644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.361811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.361843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.362026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.362206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.362232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.362382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.362535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.362561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.362809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.362969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.362994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.363174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.363461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.363510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.363715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.363900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.363925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 
00:21:42.983 [2024-04-24 19:52:24.364106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.364263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.364287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.364435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.364674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.364700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.364882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.365063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.365088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.365275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.365448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.365472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.365651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.365856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.365889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.366093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.366289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.366316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.366487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.366650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.366676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 
00:21:42.983 [2024-04-24 19:52:24.366837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.367040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.367065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.367270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.367464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.367489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.367666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.367814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.367839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.368058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.368296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.368325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.368525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.368731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.368761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.983 qpair failed and we were unable to recover it. 00:21:42.983 [2024-04-24 19:52:24.368995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.983 [2024-04-24 19:52:24.369205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.369230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.984 qpair failed and we were unable to recover it. 00:21:42.984 [2024-04-24 19:52:24.369390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.369545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.369571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.984 qpair failed and we were unable to recover it. 
00:21:42.984 [2024-04-24 19:52:24.369798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.370051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.370076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.984 qpair failed and we were unable to recover it. 00:21:42.984 [2024-04-24 19:52:24.370275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.370458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.370484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.984 qpair failed and we were unable to recover it. 00:21:42.984 [2024-04-24 19:52:24.370729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.370894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.370935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.984 qpair failed and we were unable to recover it. 00:21:42.984 [2024-04-24 19:52:24.371107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.371378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.371426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.984 qpair failed and we were unable to recover it. 00:21:42.984 [2024-04-24 19:52:24.371639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.371798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.371823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.984 qpair failed and we were unable to recover it. 00:21:42.984 [2024-04-24 19:52:24.371972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.372150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.372176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.984 qpair failed and we were unable to recover it. 00:21:42.984 [2024-04-24 19:52:24.372405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.372587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.984 [2024-04-24 19:52:24.372611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.984 qpair failed and we were unable to recover it. 
00:21:42.984 [2024-04-24 19:52:24.372802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.984 [2024-04-24 19:52:24.372956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.984 [2024-04-24 19:52:24.372982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420
00:21:42.984 qpair failed and we were unable to recover it.
[... the same failure sequence (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats back-to-back, identical except for timestamps, from 19:52:24.373176 through 19:52:24.435 ...]
00:21:42.990 [2024-04-24 19:52:24.436271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.990 [2024-04-24 19:52:24.436527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.990 [2024-04-24 19:52:24.436579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420
00:21:42.990 qpair failed and we were unable to recover it.
00:21:42.990 [2024-04-24 19:52:24.436799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.436948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.436974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.437140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.437347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.437389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.437570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.437795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.437824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.438055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.438373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.438423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.438626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.438821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.438846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.439003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.439240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.439289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.439513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.439714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.439740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 
00:21:42.990 [2024-04-24 19:52:24.439953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.440295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.440346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.440548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.440758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.440784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.440947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.441151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.441180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.441381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.441580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.441608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.441848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.442010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.442037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.442245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.442521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.442567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.442745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.442941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.442969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 
00:21:42.990 [2024-04-24 19:52:24.443198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.443387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.443413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.443573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.443834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.443885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.444086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.444326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.444377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.444576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.444833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.444884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.445074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.445233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.445258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.445440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.445653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.445682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.445887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.446079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.446104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 
00:21:42.990 [2024-04-24 19:52:24.446294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.446477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.446502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.446709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.446933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.446993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.447219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.447494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.447520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.990 qpair failed and we were unable to recover it. 00:21:42.990 [2024-04-24 19:52:24.447705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.990 [2024-04-24 19:52:24.447915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.447940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.448140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.448446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.448471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.448707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.448890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.448915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.449149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.449391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.449444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 
00:21:42.991 [2024-04-24 19:52:24.449671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.449872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.449901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.450137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.450346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.450371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.450529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.450737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.450763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.450979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.451242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.451267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.451428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.451585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.451612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.451823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.452048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.452100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.452326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.452500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.452527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 
00:21:42.991 [2024-04-24 19:52:24.452712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.452931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.452956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.453137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.453300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.453325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.453501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.453727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.453770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.453944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.454126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.454151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.454377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.454553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.454582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.454797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.454963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.454993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.455210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.455362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.455387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 
00:21:42.991 [2024-04-24 19:52:24.455548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.455732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.455759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.455941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.456254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.456304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.456510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.456687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.456713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.456911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.457097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.457121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.457303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.457452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.457477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.457664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.457866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.457894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.991 qpair failed and we were unable to recover it. 00:21:42.991 [2024-04-24 19:52:24.458091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.458325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.991 [2024-04-24 19:52:24.458371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 
00:21:42.992 [2024-04-24 19:52:24.458554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.458751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.458777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.458990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.459148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.459175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.459381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.459580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.459609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.459848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.460088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.460113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.460269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.460444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.460469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.460636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.460841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.460868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.461066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.461265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.461295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 
00:21:42.992 [2024-04-24 19:52:24.461516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.461679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.461709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.461896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.462070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.462094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.462272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.462464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.462493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.462672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.462846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.462874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.463077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.463258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.463286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.463471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.463634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.463661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.463850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.464038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.464083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 
00:21:42.992 [2024-04-24 19:52:24.464265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.464443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.464470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.464652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.464850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.464876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.465052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.465320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.992 [2024-04-24 19:52:24.465367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:42.992 qpair failed and we were unable to recover it. 00:21:42.992 [2024-04-24 19:52:24.465596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.265 [2024-04-24 19:52:24.465814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.265 [2024-04-24 19:52:24.465840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:43.265 qpair failed and we were unable to recover it. 00:21:43.265 [2024-04-24 19:52:24.466013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.265 [2024-04-24 19:52:24.466172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.265 [2024-04-24 19:52:24.466197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:43.265 qpair failed and we were unable to recover it. 00:21:43.265 [2024-04-24 19:52:24.466386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.265 [2024-04-24 19:52:24.466535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.265 [2024-04-24 19:52:24.466561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:43.265 qpair failed and we were unable to recover it. 00:21:43.265 [2024-04-24 19:52:24.466741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.265 [2024-04-24 19:52:24.466907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.265 [2024-04-24 19:52:24.466936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420 00:21:43.265 qpair failed and we were unable to recover it. 
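[Editorial note, inferred from the log alone rather than from SPDK source: in the records that follow, the tqpair handle in the nvme_tcp.c messages changes from 0x7f4448000b90 to 0x7f4440000b90 while the target address and port stay the same, consistent with the initiator discarding the failed qpair object and allocating a fresh one before continuing to retry.]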
00:21:43.265 [2024-04-24 19:52:24.467095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.265 [2024-04-24 19:52:24.467264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.265 [2024-04-24 19:52:24.467290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4448000b90 with addr=10.0.0.2, port=4420
00:21:43.265 qpair failed and we were unable to recover it.
00:21:43.265 [2024-04-24 19:52:24.467489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.265 [2024-04-24 19:52:24.467678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.265 [2024-04-24 19:52:24.467707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:43.265 qpair failed and we were unable to recover it.
[... the same failure group then repeats unchanged, timestamps advancing from 19:52:24.467903 through 19:52:24.495087 (elapsed time 00:21:43.265 to 00:21:43.267): every retry reports connect() errno = 111 for tqpair=0x7f4440000b90 at addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:21:43.267 [2024-04-24 19:52:24.495293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.495464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.495489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.267 qpair failed and we were unable to recover it. 00:21:43.267 [2024-04-24 19:52:24.495717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.495956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.495982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.267 qpair failed and we were unable to recover it. 00:21:43.267 [2024-04-24 19:52:24.496198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.496368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.496394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.267 qpair failed and we were unable to recover it. 00:21:43.267 [2024-04-24 19:52:24.496580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.496816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.496860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.267 qpair failed and we were unable to recover it. 00:21:43.267 [2024-04-24 19:52:24.497061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.497310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.497353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.267 qpair failed and we were unable to recover it. 00:21:43.267 [2024-04-24 19:52:24.497561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.497785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.497836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.267 qpair failed and we were unable to recover it. 00:21:43.267 [2024-04-24 19:52:24.498018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.498235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.498281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.267 qpair failed and we were unable to recover it. 
00:21:43.267 [2024-04-24 19:52:24.498465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.498640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.498666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.267 qpair failed and we were unable to recover it. 00:21:43.267 [2024-04-24 19:52:24.498838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.499036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.499080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.267 qpair failed and we were unable to recover it. 00:21:43.267 [2024-04-24 19:52:24.499265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.499461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.499488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.267 qpair failed and we were unable to recover it. 00:21:43.267 [2024-04-24 19:52:24.499685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.499918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.499962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.267 qpair failed and we were unable to recover it. 00:21:43.267 [2024-04-24 19:52:24.500175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.500457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.267 [2024-04-24 19:52:24.500507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.267 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.500688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.500892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.500937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.501128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.501334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.501361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 
00:21:43.268 [2024-04-24 19:52:24.501525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.501711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.501760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.501947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.502176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.502225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.502388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.502600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.502646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.502860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.503085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.503129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.503367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.503560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.503586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.503798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.504025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.504067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.504274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.504473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.504500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 
00:21:43.268 [2024-04-24 19:52:24.504707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.504933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.504977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.505162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.505373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.505401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.505582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.505768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.505811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.506015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.506276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.506318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.506517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.506746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.506795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.507015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.507234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.507278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.507458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.507643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.507671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 
00:21:43.268 [2024-04-24 19:52:24.507878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.508095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.508143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.508347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.508545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.508570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.508803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.509071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.509114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.509364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.509542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.509566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.509750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.509968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.510010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.510202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.510362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.510392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.510585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.510792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.510819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 
00:21:43.268 [2024-04-24 19:52:24.511038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.511242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.511285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.511473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.511650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.511676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.511914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.512262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.512326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.512516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.512690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.512733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.268 [2024-04-24 19:52:24.512946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.513144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.268 [2024-04-24 19:52:24.513186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.268 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.513371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.513580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.513606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.513811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.514005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.514048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 
00:21:43.269 [2024-04-24 19:52:24.514254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.514481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.514506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.514708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.514962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.515003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.515201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.515403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.515430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.515594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.515803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.515846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.516064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.516316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.516359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.516553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.516736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.516764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.516973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.517224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.517266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 
00:21:43.269 [2024-04-24 19:52:24.517424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.517605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.517637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.517847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.518070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.518113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.518381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.518584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.518609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.518844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.519071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.519114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.519341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.519539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.519568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.519774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.520031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.520073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.520274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.520469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.520495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 
00:21:43.269 [2024-04-24 19:52:24.520698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.520951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.520999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.521241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.521461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.521486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.521679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.521881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.521923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.522132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.522326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.522351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.522563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.522744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.522789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.522998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.523230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.523273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.523482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.523707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.523753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 
00:21:43.269 [2024-04-24 19:52:24.523933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.524197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.524245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.524427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.524639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.524665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.524871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.525122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.525164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.525398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.525610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.525644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.269 qpair failed and we were unable to recover it. 00:21:43.269 [2024-04-24 19:52:24.525882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.526170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.269 [2024-04-24 19:52:24.526220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.526428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.526621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.526661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.526843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.527053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.527096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 
00:21:43.270 [2024-04-24 19:52:24.527320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.527501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.527528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.527737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.527885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.527912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.528160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.528396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.528422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.528602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.528789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.528816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.529053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.529251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.529277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.529456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.529638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.529664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.529874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.530120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.530147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 
00:21:43.270 [2024-04-24 19:52:24.530384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.530582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.530607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.530830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.531087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.531130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.531360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.531586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.531612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.531831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.532076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.532105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.532356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.532545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.532570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.532750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.532936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.532963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.533161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.533385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.533410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 
00:21:43.270 [2024-04-24 19:52:24.533619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.533828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.533870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.534105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.534498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.534554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.534739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.534922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.534965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.535203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.535618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.535687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.535932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.536276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.536331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.536540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.536725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.536751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.270 [2024-04-24 19:52:24.536953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.537370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.537417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 
00:21:43.270 [2024-04-24 19:52:24.537624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.537789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.270 [2024-04-24 19:52:24.537815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.270 qpair failed and we were unable to recover it. 00:21:43.271 [2024-04-24 19:52:24.538020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.271 [2024-04-24 19:52:24.538243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.271 [2024-04-24 19:52:24.538290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.271 qpair failed and we were unable to recover it. 00:21:43.271 [2024-04-24 19:52:24.538490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.271 [2024-04-24 19:52:24.538694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.271 [2024-04-24 19:52:24.538721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.271 qpair failed and we were unable to recover it. 00:21:43.271 [2024-04-24 19:52:24.538956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.271 [2024-04-24 19:52:24.539284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.271 [2024-04-24 19:52:24.539345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.271 qpair failed and we were unable to recover it. 00:21:43.271 [2024-04-24 19:52:24.539507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.271 [2024-04-24 19:52:24.539666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.271 [2024-04-24 19:52:24.539692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.271 qpair failed and we were unable to recover it. 00:21:43.271 [2024-04-24 19:52:24.539941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.271 [2024-04-24 19:52:24.540201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.271 [2024-04-24 19:52:24.540254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.271 qpair failed and we were unable to recover it. 00:21:43.271 [2024-04-24 19:52:24.540457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.271 [2024-04-24 19:52:24.540659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.271 [2024-04-24 19:52:24.540685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.271 qpair failed and we were unable to recover it. 
00:21:43.271 [2024-04-24 19:52:24.540859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.271 [2024-04-24 19:52:24.541087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.271 [2024-04-24 19:52:24.541129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420
00:21:43.271 qpair failed and we were unable to recover it.
00:21:43.271 [... the four-line cycle above (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f4440000b90 at 10.0.0.2:4420, then "qpair failed and we were unable to recover it.") repeats approximately 150 more times between 19:52:24.541 and 19:52:24.616, with no variation other than the timestamps ...]
00:21:43.276 [2024-04-24 19:52:24.615533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.276 [2024-04-24 19:52:24.615716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.276 [2024-04-24 19:52:24.615743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.276 qpair failed and we were unable to recover it. 00:21:43.276 [2024-04-24 19:52:24.615976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.276 [2024-04-24 19:52:24.616202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.276 [2024-04-24 19:52:24.616246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.276 qpair failed and we were unable to recover it. 00:21:43.276 [2024-04-24 19:52:24.616489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.276 [2024-04-24 19:52:24.616687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.276 [2024-04-24 19:52:24.616751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.276 qpair failed and we were unable to recover it. 00:21:43.276 [2024-04-24 19:52:24.616983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.276 [2024-04-24 19:52:24.617258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.276 [2024-04-24 19:52:24.617301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.276 qpair failed and we were unable to recover it. 00:21:43.276 [2024-04-24 19:52:24.617512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.276 [2024-04-24 19:52:24.617740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.276 [2024-04-24 19:52:24.617783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.276 qpair failed and we were unable to recover it. 00:21:43.276 [2024-04-24 19:52:24.617996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.276 [2024-04-24 19:52:24.618311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.618366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.618552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.618853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.618905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 
00:21:43.277 [2024-04-24 19:52:24.619138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.619510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.619560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.619755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.619937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.619980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.620183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.620383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.620426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.620612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.620804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.620848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.621063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.621249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.621293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.621478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.621660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.621703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.621934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.622349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.622403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 
00:21:43.277 [2024-04-24 19:52:24.622622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.622835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.622860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.623072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.623331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.623373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.623541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.623722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.623749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.623984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.624309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.624368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.624576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.624783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.624827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.625029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.625249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.625293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.625500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.625683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.625709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 
00:21:43.277 [2024-04-24 19:52:24.625886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.626108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.626150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.626383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.626607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.626637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.626817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.627029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.627072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.627265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.627460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.627485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.627747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.628114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.628164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.628353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.628580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.628605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.628816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.629034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.629082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 
00:21:43.277 [2024-04-24 19:52:24.629284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.629517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.629542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.629726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.629961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.629990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.630237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.630623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.630688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.630878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.631091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.631134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.631338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.631506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.631531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.277 [2024-04-24 19:52:24.631698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.631938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.277 [2024-04-24 19:52:24.631986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.277 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.632200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.632420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.632461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 
00:21:43.278 [2024-04-24 19:52:24.632646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.632812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.632838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.633024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.633244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.633288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.633519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.633703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.633729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.633936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.634176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.634226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.634464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.634751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.634794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.635001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.635250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.635291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.635489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.635698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.635727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 
00:21:43.278 [2024-04-24 19:52:24.635922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.636217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.636271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.636497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.636711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.636753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.636962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.637179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.637221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.637456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.637788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.637832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.638084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.638316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.638341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.638533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.638737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.638781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.638982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.639202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.639245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 
00:21:43.278 [2024-04-24 19:52:24.639452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.639639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.639665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.639849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.640066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.640093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.640307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.640541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.640565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.640775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.641024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.641067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.641273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.641687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.641734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.641904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.642179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.642222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.642413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.642639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.642671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 
00:21:43.278 [2024-04-24 19:52:24.642831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.643063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.643092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.643313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.643544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.643569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.643761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.643947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.643990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.644177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.644397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.644439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.644651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.644834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.644859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.645036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.645261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.278 [2024-04-24 19:52:24.645304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.278 qpair failed and we were unable to recover it. 00:21:43.278 [2024-04-24 19:52:24.645493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.645652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.645679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 
00:21:43.279 [2024-04-24 19:52:24.645914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.646121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.646147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.646389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.646599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.646626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.646871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.647229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.647282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.647489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.647690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.647717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.647948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.648376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.648432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.648615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.648812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.648838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.649044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.649269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.649311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 
00:21:43.279 [2024-04-24 19:52:24.649486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.649711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.649737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.649935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.650138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.650181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.650390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.650584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.650609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4440000b90 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.650819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.651064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.651096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.651323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.651582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.651650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.651887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.652139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.652173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.652580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.652837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.652863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 
00:21:43.279 [2024-04-24 19:52:24.653076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.653291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.653318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.653548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.653728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.653753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.654019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.654375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.654426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.654621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.654830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.654855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.655042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.655183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.655208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.655591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.655822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.655847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.656036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.656244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.656273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 
00:21:43.279 [2024-04-24 19:52:24.656623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.656877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.656901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.279 qpair failed and we were unable to recover it. 00:21:43.279 [2024-04-24 19:52:24.657120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.279 [2024-04-24 19:52:24.657392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.657441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.657657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.657934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.657959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.658195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.658477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.658523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.658725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.658907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.658932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.659128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.659301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.659328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.659524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.659729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.659754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 
00:21:43.280 [2024-04-24 19:52:24.659962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.660244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.660300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.660532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.660786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.660812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.660969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.661381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.661432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.661657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.661825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.661851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.662060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.662299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.662350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.662556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.662742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.662768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.662964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.663142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.663167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 
00:21:43.280 [2024-04-24 19:52:24.663393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.663600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.663625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.663799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.664052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.664102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.664500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.664709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.664734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.664969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.665311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.665361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.665585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.665794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.665818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.666038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.666357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.666421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 00:21:43.280 [2024-04-24 19:52:24.666633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.666844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.280 [2024-04-24 19:52:24.666869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.280 qpair failed and we were unable to recover it. 
[... the same failure sequence repeats 147 more times between 19:52:24.667098 and 19:52:24.735238 (console time 00:21:43.280-00:21:43.287), each occurrence consisting of two posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 entries, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:21:43.287 [2024-04-24 19:52:24.735441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.735662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.735687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.735870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.736019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.736043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.736246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.736435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.736462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.736677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.736856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.736881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.737087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.737366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.737422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.737618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.737826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.737851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.738055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.738393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.738453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 
00:21:43.287 [2024-04-24 19:52:24.738674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.738892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.738935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.739167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.739502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.739562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.739778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.739991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.740019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.740247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.740607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.740681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.740890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.741104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.741131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.741325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.741534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.741558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.741733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.741896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.741921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 
00:21:43.287 [2024-04-24 19:52:24.742124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.742379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.742436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.742644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.742873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.742898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.743057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.743272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.743325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.743552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.743729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.743758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.743956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.744243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.744306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.744514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.744751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.744779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 00:21:43.287 [2024-04-24 19:52:24.744987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.745224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.287 [2024-04-24 19:52:24.745251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.287 qpair failed and we were unable to recover it. 
00:21:43.288 [2024-04-24 19:52:24.745474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.745677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.745705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.745905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.746064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.746088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.746265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.746472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.746499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.746668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.746934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.746985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.747212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.747412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.747439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.747666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.747960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.748020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.748224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.748428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.748456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 
00:21:43.288 [2024-04-24 19:52:24.748644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.748830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.748855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.749036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.749400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.749454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.749680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.749884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.749910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.750120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.750409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.750470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.750672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.750877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.750904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.751130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.751414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.751468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.751678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.751863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.751887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 
00:21:43.288 [2024-04-24 19:52:24.752067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.752426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.752479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.752700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.752992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.753056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.753284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.753612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.753675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.753917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.754154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.754202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.754406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.754613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.754646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.754877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.755063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.755087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.755289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.755605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.755679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 
00:21:43.288 [2024-04-24 19:52:24.755861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.756072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.756138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.756353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.756557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.756584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.756761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.756939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.756963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.757190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.757493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.757548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.757780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.758038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.758093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.758317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.758542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.758570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.288 [2024-04-24 19:52:24.758756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.758928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.758955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 
00:21:43.288 [2024-04-24 19:52:24.759175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.759414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.288 [2024-04-24 19:52:24.759466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.288 qpair failed and we were unable to recover it. 00:21:43.289 [2024-04-24 19:52:24.759692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.759919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.759947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.289 qpair failed and we were unable to recover it. 00:21:43.289 [2024-04-24 19:52:24.760143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.760461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.760515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.289 qpair failed and we were unable to recover it. 00:21:43.289 [2024-04-24 19:52:24.760785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.760952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.760977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.289 qpair failed and we were unable to recover it. 00:21:43.289 [2024-04-24 19:52:24.761162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.761387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.761414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.289 qpair failed and we were unable to recover it. 00:21:43.289 [2024-04-24 19:52:24.761586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.761798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.761826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.289 qpair failed and we were unable to recover it. 00:21:43.289 [2024-04-24 19:52:24.762063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.762398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.762449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.289 qpair failed and we were unable to recover it. 
00:21:43.289 [2024-04-24 19:52:24.762650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.762877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.762904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.289 qpair failed and we were unable to recover it. 00:21:43.289 [2024-04-24 19:52:24.763098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.763274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.763303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.289 qpair failed and we were unable to recover it. 00:21:43.289 [2024-04-24 19:52:24.763509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.763685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.763715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.289 qpair failed and we were unable to recover it. 00:21:43.289 [2024-04-24 19:52:24.763925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.764069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.764095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.289 qpair failed and we were unable to recover it. 00:21:43.289 [2024-04-24 19:52:24.764296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.764518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.764545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.289 qpair failed and we were unable to recover it. 00:21:43.289 [2024-04-24 19:52:24.764758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.764949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.289 [2024-04-24 19:52:24.764974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.289 qpair failed and we were unable to recover it. 00:21:43.289 [2024-04-24 19:52:24.765156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.765348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.765375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.568 qpair failed and we were unable to recover it. 
00:21:43.568 [2024-04-24 19:52:24.765583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.765765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.765794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.568 qpair failed and we were unable to recover it. 00:21:43.568 [2024-04-24 19:52:24.765963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.766170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.766198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.568 qpair failed and we were unable to recover it. 00:21:43.568 [2024-04-24 19:52:24.766400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.766621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.766656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.568 qpair failed and we were unable to recover it. 00:21:43.568 [2024-04-24 19:52:24.766823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.767029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.767056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.568 qpair failed and we were unable to recover it. 00:21:43.568 [2024-04-24 19:52:24.767231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.767468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.767495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.568 qpair failed and we were unable to recover it. 00:21:43.568 [2024-04-24 19:52:24.767702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.767878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.767906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.568 qpair failed and we were unable to recover it. 00:21:43.568 [2024-04-24 19:52:24.768101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.768298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.768328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.568 qpair failed and we were unable to recover it. 
00:21:43.568 [2024-04-24 19:52:24.768560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.768713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.768738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.568 qpair failed and we were unable to recover it. 00:21:43.568 [2024-04-24 19:52:24.768937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.568 [2024-04-24 19:52:24.769184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.769239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.769476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.769726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.769752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.769960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.770272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.770323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.770540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.770766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.770795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.771020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.771250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.771275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.771455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.771654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.771682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 
00:21:43.569 [2024-04-24 19:52:24.771882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.772105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.772133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.772333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.772538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.772562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.772709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.772894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.772936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.773157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.773353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.773381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.773583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.773820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.773848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.774053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.774278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.774305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.774499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.774704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.774733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 
00:21:43.569 [2024-04-24 19:52:24.774932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.775125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.775152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.775380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.775582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.775610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.775828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.776060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.776087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.776287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.776613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.776692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.776926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.777112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.777141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.777329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.777498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.777527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.777739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.777903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.777927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 
00:21:43.569 [2024-04-24 19:52:24.778092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.778270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.778294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.778472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.778659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.778687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.778882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.779077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.779105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.779299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.779495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.779519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.779728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.779889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.779914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.780064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.780281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.780309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 00:21:43.569 [2024-04-24 19:52:24.780513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.780697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.569 [2024-04-24 19:52:24.780723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.569 qpair failed and we were unable to recover it. 
00:21:43.569 [2024-04-24 19:52:24.780878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.569 [2024-04-24 19:52:24.781058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.569 [2024-04-24 19:52:24.781085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.569 qpair failed and we were unable to recover it.
00:21:43.569 (the four-record failure sequence above, two refused connect() calls followed by the tqpair=0x16cff30 socket error and the unrecoverable-qpair message, repeats for every retry between this attempt and the final one below; only the timestamps advance)
00:21:43.575 [2024-04-24 19:52:24.845803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.575 [2024-04-24 19:52:24.846020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.575 [2024-04-24 19:52:24.846076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.575 qpair failed and we were unable to recover it.
00:21:43.575 [2024-04-24 19:52:24.846266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.846483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.846510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.575 qpair failed and we were unable to recover it. 00:21:43.575 [2024-04-24 19:52:24.846692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.846848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.846873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.575 qpair failed and we were unable to recover it. 00:21:43.575 [2024-04-24 19:52:24.847054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.847233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.847258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.575 qpair failed and we were unable to recover it. 00:21:43.575 [2024-04-24 19:52:24.847482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.847642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.847668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.575 qpair failed and we were unable to recover it. 00:21:43.575 [2024-04-24 19:52:24.847818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.847972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.847998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.575 qpair failed and we were unable to recover it. 00:21:43.575 [2024-04-24 19:52:24.848148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.848322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.848346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.575 qpair failed and we were unable to recover it. 00:21:43.575 [2024-04-24 19:52:24.848522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.848721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.848749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.575 qpair failed and we were unable to recover it. 
00:21:43.575 [2024-04-24 19:52:24.848925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.849107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.849131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.575 qpair failed and we were unable to recover it. 00:21:43.575 [2024-04-24 19:52:24.849290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.849473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.849498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.575 qpair failed and we were unable to recover it. 00:21:43.575 [2024-04-24 19:52:24.849711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.849862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.849888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.575 qpair failed and we were unable to recover it. 00:21:43.575 [2024-04-24 19:52:24.850074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.850252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.575 [2024-04-24 19:52:24.850277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.575 qpair failed and we were unable to recover it. 00:21:43.575 [2024-04-24 19:52:24.850436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.850613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.850647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.850826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.850987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.851014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.851214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.851363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.851389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 
00:21:43.576 [2024-04-24 19:52:24.851640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.851800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.851843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.852063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.852216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.852241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.852419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.852648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.852691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.852848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.853029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.853056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.853224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.853465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.853522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.853705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.853865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.853891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.854058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.854264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.854288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 
00:21:43.576 [2024-04-24 19:52:24.854493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.854679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.854706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.854861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.855017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.855057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.855263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.855435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.855464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.855642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.855834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.855858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.856077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.856258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.856283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.856479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.856682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.856711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.856890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.857094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.857119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 
00:21:43.576 [2024-04-24 19:52:24.857295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.857491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.857518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.857746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.857972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.858026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.858225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.858371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.858395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.858576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.858730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.858755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.858959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.859135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.859160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.859342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.859547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.859575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.859786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.860019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.860075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 
00:21:43.576 [2024-04-24 19:52:24.860298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.860469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.860511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.860720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.860903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.860928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.861110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.861271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.861296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.861477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.861639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.861665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.576 qpair failed and we were unable to recover it. 00:21:43.576 [2024-04-24 19:52:24.861846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.576 [2024-04-24 19:52:24.862007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.862032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.862179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.862428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.862479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.862702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.862856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.862881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 
00:21:43.577 [2024-04-24 19:52:24.863043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.863227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.863252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.863413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.863621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.863655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.863849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.864031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.864055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.864228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.864377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.864419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.864626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.864866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.864894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.865094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.865380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.865405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.865613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.865785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.865810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 
00:21:43.577 [2024-04-24 19:52:24.865994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.866172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.866197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.866401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.866617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.866655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.866839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.867050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.867075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.867231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.867405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.867433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.867656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.867880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.867907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.868112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.868315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.868340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.868499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.868679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.868705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 
00:21:43.577 [2024-04-24 19:52:24.868864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.869046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.869072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.869280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.869462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.869487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.869726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.869926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.869954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.870120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.870409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.870467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.870681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.870844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.870868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.871053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.871254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.871314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.871494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.871727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.871753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 
00:21:43.577 [2024-04-24 19:52:24.871931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.872163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.872218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.872422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.872625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.872679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.872865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.873052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.873080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.873276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.873509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.577 [2024-04-24 19:52:24.873534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.577 qpair failed and we were unable to recover it. 00:21:43.577 [2024-04-24 19:52:24.873714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.873867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.873892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.874076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.874251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.874276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.874432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.874607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.874638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 
00:21:43.578 [2024-04-24 19:52:24.874847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.875028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.875052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.875255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.875578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.875638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.875869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.876057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.876082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.876285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.876543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.876590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.876805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.876985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.877010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.877194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.877374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.877399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.877575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.877800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.877829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 
00:21:43.578 [2024-04-24 19:52:24.878010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.878175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.878204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.878431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.878670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.878695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.878840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.879016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.879041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.879272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.879498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.879523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.879681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.879865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.879890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.880098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.880304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.880331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.880507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.880685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.880713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 
00:21:43.578 [2024-04-24 19:52:24.880882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.881072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.881100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.881331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.881541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.881566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.881745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.881896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.881921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.578 qpair failed and we were unable to recover it. 00:21:43.578 [2024-04-24 19:52:24.882132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.578 [2024-04-24 19:52:24.882319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.882378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.579 qpair failed and we were unable to recover it. 00:21:43.579 [2024-04-24 19:52:24.882559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.882755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.882781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.579 qpair failed and we were unable to recover it. 00:21:43.579 [2024-04-24 19:52:24.882959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.883160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.883187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.579 qpair failed and we were unable to recover it. 00:21:43.579 [2024-04-24 19:52:24.883380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.883573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.883601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.579 qpair failed and we were unable to recover it. 
00:21:43.579 [2024-04-24 19:52:24.883781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.883981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.884009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.579 qpair failed and we were unable to recover it. 00:21:43.579 [2024-04-24 19:52:24.884202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.884381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.884407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.579 qpair failed and we were unable to recover it. 00:21:43.579 [2024-04-24 19:52:24.884585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.884746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.884786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.579 qpair failed and we were unable to recover it. 00:21:43.579 [2024-04-24 19:52:24.885016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.885201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.885226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.579 qpair failed and we were unable to recover it. 00:21:43.579 [2024-04-24 19:52:24.885413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.885579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.885606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.579 qpair failed and we were unable to recover it. 00:21:43.579 [2024-04-24 19:52:24.885813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.886092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.886144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.579 qpair failed and we were unable to recover it. 00:21:43.579 [2024-04-24 19:52:24.886345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.886574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.579 [2024-04-24 19:52:24.886601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.579 qpair failed and we were unable to recover it. 
00:21:43.579 [2024-04-24 19:52:24.886811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.886967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.886992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.887141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.887302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.887327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.887504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.887720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.887745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.887900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.888057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.888087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.888272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.888498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.888523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.888704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.888913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.888941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.889144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.889330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.889372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.889556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.889737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.889763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.889943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.890169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.890196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.890402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.890586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.890610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.890770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.890973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.891001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.891183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.891337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.891379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.891610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.891794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.891819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.891974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.892156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.892184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.892339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.892511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.892536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.892721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.892942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.893002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.579 [2024-04-24 19:52:24.893187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.893346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.579 [2024-04-24 19:52:24.893371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.579 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.893532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.893714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.893739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.893939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.894164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.894192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.894419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.894596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.894621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.894834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.895014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.895039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.895195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.895369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.895394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.895603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.895814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.895839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.896018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.896230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.896254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.896545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.896704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.896729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.896911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.897132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.897180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.897380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.897586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.897614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.897820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.898034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.898083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.898306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.898502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.898526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.898728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.898922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.898949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.899120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.899323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.899350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.899573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.899752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.899790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.900000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.900160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.900185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.900343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.900500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.900524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.900738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.900937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.900978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.901182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.901363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.901389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.901596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.901801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.901830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.902058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.902383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.902433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.902644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.902800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.902825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.903010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.903171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.903195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.903380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.903619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.903652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.903815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.903968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.903993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.580 qpair failed and we were unable to recover it.
00:21:43.580 [2024-04-24 19:52:24.904144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.580 [2024-04-24 19:52:24.904353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.904378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.904547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.904728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.904754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.904934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.905247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.905304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.905531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.905740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.905766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.905980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.906231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.906299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.906503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.906715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.906741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.906925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.907124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.907152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.907403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.907597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.907625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.907836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.908019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.908044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.908234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.908414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.908439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.908597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.908770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.908796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.908971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.909174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.909199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.909359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.909542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.909572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.909798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.909990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.910015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.910173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.910377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.910418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.910667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.910874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.910899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.911055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.911237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.911263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.911486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.911737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.911762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.911972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.912157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.912182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.912365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.912546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.912571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.912753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.912920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.912948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.913133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.913313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.913340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.913522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.913716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.913748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.913909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.914089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.914113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.914319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.914530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.914555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.914716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.914908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.914933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.915112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.915266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.915291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.915455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.915674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.915700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.581 qpair failed and we were unable to recover it.
00:21:43.581 [2024-04-24 19:52:24.915882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.581 [2024-04-24 19:52:24.916057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.916081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.916262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.916435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.916477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.916687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.916844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.916869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.917046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.917265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.917290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.917441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.917615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.917648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.917872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.918021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.918045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.918324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.918555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.918580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.918746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.918929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.918954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.919183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.919336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.919362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.919600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.919812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.919838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.920068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.920277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.920302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.920529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.920731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.920756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.920960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.921163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.921188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.921370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.921579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.921604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.921802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.921983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.922008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.922166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.922350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.922375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.922557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.922756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.922782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.922942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.923119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.923144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.923355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.923531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.923560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.923794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.924004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.924028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.924215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.924464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.924512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.924719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.924904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.924929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.925078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.925274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.925299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.925473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.925668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.925696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.925914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.926073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.926098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.926289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.926472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.926497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.582 qpair failed and we were unable to recover it.
00:21:43.582 [2024-04-24 19:52:24.926653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.582 [2024-04-24 19:52:24.926829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.926856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.927044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.927243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.927268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.927480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.927711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.927737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.927957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.928183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.928211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.928432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.928589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.928614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.928804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.929003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.929028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.929245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.929491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.929516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.929745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.929945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.929970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.930149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.930327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.930352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.930559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.930784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.930810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.930970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.931150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.931174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.931382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.931589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.931616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.931816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.931992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.932017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.932200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.932380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.932405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.932609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.932781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.932809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.933039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.933248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.933273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.933430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.933591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.933616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.933811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.933995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.934020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.934202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.934394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.934419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.934580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.934762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.934793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.934945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.935124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.935150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.935332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.935511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.935536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.583 qpair failed and we were unable to recover it.
00:21:43.583 [2024-04-24 19:52:24.935726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.583 [2024-04-24 19:52:24.935906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.935931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.936109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.936264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.936289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.936442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.936650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.936676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.936824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.937005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.937030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.937209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.937422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.937474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.937689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.937869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.937895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.938099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.938279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.938305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.938456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.938641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.938684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.938923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.939086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.939111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.939292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.939452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.939477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.939657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.939835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.939877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.940059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.940269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.940294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.940487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.940643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.940669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.940830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.940986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.941011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.941218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.941371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.941395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.941593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.941770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.941796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.941958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.942112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.942139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.942296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.942502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.942527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.942687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.942838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.942863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.943036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.943211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.943237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.943397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.943582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.943607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.943771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.943976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.944019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.944223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.944377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.944401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.944599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.944806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.944831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.944984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.945137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.945161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.945338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.945516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.945541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.945728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.945904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.945930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.946138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.946353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.946378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.584 qpair failed and we were unable to recover it.
00:21:43.584 [2024-04-24 19:52:24.946555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.584 [2024-04-24 19:52:24.946713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.946739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.585 qpair failed and we were unable to recover it.
00:21:43.585 [2024-04-24 19:52:24.946939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.947156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.947182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.585 qpair failed and we were unable to recover it.
00:21:43.585 [2024-04-24 19:52:24.947390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.947548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.947572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.585 qpair failed and we were unable to recover it.
00:21:43.585 [2024-04-24 19:52:24.947768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.947947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.947971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.585 qpair failed and we were unable to recover it.
00:21:43.585 [2024-04-24 19:52:24.948134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.948308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.948332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.585 qpair failed and we were unable to recover it.
00:21:43.585 [2024-04-24 19:52:24.948512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.948722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.948748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.585 qpair failed and we were unable to recover it.
00:21:43.585 [2024-04-24 19:52:24.948926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.949188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.949241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.585 qpair failed and we were unable to recover it.
00:21:43.585 [2024-04-24 19:52:24.949453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.949637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.585 [2024-04-24 19:52:24.949663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.585 qpair failed and we were unable to recover it.
00:21:43.585 [2024-04-24 19:52:24.949843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.950052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.950076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.950234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.950407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.950431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.950614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.950845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.950870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.951022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.951169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.951194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.951378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.951552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.951579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.951798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.951976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.952001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.952164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.952319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.952344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 
00:21:43.585 [2024-04-24 19:52:24.952524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.952752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.952780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.952992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.953209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.953237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.953449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.953634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.953659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.953811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.953990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.954014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.954191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.954394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.954423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.954623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.954863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.954892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.955108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.955290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.955315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 
00:21:43.585 [2024-04-24 19:52:24.955497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.955686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.955716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.955917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.956075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.956101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.956282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.956430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.956455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.956604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.956795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.956820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.585 [2024-04-24 19:52:24.956993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.957201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.585 [2024-04-24 19:52:24.957226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.585 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.957430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.957642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.957685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.957840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.958032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.958056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 
00:21:43.586 [2024-04-24 19:52:24.958266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.958469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.958496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.958677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.958858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.958888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.959093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.959441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.959465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.959655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.959844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.959868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.960077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.960358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.960408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.960609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.960825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.960850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.961001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.961160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.961185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 
00:21:43.586 [2024-04-24 19:52:24.961400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.961623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.961658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.961890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.962071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.962112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.962313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.962530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.962583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.962823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.962984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.963009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.963161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.963342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.963367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.963590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.963767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.963793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.963978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.964183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.964208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 
00:21:43.586 [2024-04-24 19:52:24.964413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.964638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.964664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.964868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.965061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.965086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.965299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.965531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.965558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.965792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.965960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.965988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.966169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.966358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.966382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.966539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.966742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.966767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.966948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.967130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.967155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 
00:21:43.586 [2024-04-24 19:52:24.967336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.967525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.967552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.967759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.967964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.967992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.968223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.968406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.968431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.968659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.968843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.968870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.969049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.969231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.969256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.586 [2024-04-24 19:52:24.969454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.969686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.586 [2024-04-24 19:52:24.969714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.586 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.969937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.970134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.970159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 
00:21:43.587 [2024-04-24 19:52:24.970364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.970529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.970557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.970770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.970983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.971009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.971187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.971349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.971373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.971523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.971693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.971723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.971901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.972137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.972163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.972344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.972491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.972517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.972731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.972938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.972966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 
00:21:43.587 [2024-04-24 19:52:24.973140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.973334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.973361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.973531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.973714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.973740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.973951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.974193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.974218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.974404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.974586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.974611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.974811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.975013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.975038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.975244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.975452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.975492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.975704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.975881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.975906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 
00:21:43.587 [2024-04-24 19:52:24.976119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.976352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.976383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.976542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.976778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.976807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.977008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.977209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.977236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.977409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.977620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.977653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.977868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.978051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.978076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.978286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.978458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.978485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.978662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.978814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.978839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 
00:21:43.587 [2024-04-24 19:52:24.979029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.979200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.979225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.979407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.979566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.587 [2024-04-24 19:52:24.979590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.587 qpair failed and we were unable to recover it. 00:21:43.587 [2024-04-24 19:52:24.979747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.979927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.979952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.980172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.980359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.980389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.980572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.980749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.980774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.980950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.981126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.981151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.981325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.981535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.981563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 
00:21:43.588 [2024-04-24 19:52:24.981756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.981969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.982020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.982194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.982486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.982538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.982740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.982944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.982969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.983177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.983492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.983542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.983746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.983920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.983948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.984157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.984336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.984363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.984585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.984790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.984818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 
00:21:43.588 [2024-04-24 19:52:24.985031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.985287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.985338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.985562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.985780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.985809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.986017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.986199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.986224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.986443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.986644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.986672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.986871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.987093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.987121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.987344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.987561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.987589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.987786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.988021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.988049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 
00:21:43.588 [2024-04-24 19:52:24.988287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.988495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.988523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.988723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.988920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.988949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.989129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.989328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.989357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.989569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.989785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.989814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.990047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.990300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.990354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.990546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.990744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.990774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.990948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.991098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.991142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 
00:21:43.588 [2024-04-24 19:52:24.991307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.991544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.991571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.991757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.992029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.588 [2024-04-24 19:52:24.992081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.588 qpair failed and we were unable to recover it. 00:21:43.588 [2024-04-24 19:52:24.992265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.589 [2024-04-24 19:52:24.992426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.589 [2024-04-24 19:52:24.992452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.589 qpair failed and we were unable to recover it. 00:21:43.589 [2024-04-24 19:52:24.992644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.589 [2024-04-24 19:52:24.992856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.589 [2024-04-24 19:52:24.992886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.589 qpair failed and we were unable to recover it. 00:21:43.589 [2024-04-24 19:52:24.993113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.589 [2024-04-24 19:52:24.993337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.589 [2024-04-24 19:52:24.993366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.589 qpair failed and we were unable to recover it. 00:21:43.589 [2024-04-24 19:52:24.993598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.589 [2024-04-24 19:52:24.993826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.589 [2024-04-24 19:52:24.993855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.589 qpair failed and we were unable to recover it. 00:21:43.589 [2024-04-24 19:52:24.994033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.589 [2024-04-24 19:52:24.994214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.589 [2024-04-24 19:52:24.994243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.589 qpair failed and we were unable to recover it. 
00:21:43.594 [2024-04-24 19:52:25.060003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.594 [2024-04-24 19:52:25.060170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.594 [2024-04-24 19:52:25.060197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.594 qpair failed and we were unable to recover it. 00:21:43.594 [2024-04-24 19:52:25.060431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.594 [2024-04-24 19:52:25.060607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.594 [2024-04-24 19:52:25.060639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.594 qpair failed and we were unable to recover it. 00:21:43.594 [2024-04-24 19:52:25.060840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.594 [2024-04-24 19:52:25.061071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.594 [2024-04-24 19:52:25.061096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.594 qpair failed and we were unable to recover it. 00:21:43.594 [2024-04-24 19:52:25.061284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.594 [2024-04-24 19:52:25.061438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.594 [2024-04-24 19:52:25.061462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.594 qpair failed and we were unable to recover it. 00:21:43.594 [2024-04-24 19:52:25.061643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.061825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.061851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.062059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.062255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.062279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.062461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.062681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.062709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 
00:21:43.882 [2024-04-24 19:52:25.062902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.063111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.063136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.063302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.063458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.063483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.063684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.063876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.063901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.064123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.064271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.064296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.064489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.064664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.064690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.064866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.065094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.065119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.065268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.065451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.065475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 
00:21:43.882 [2024-04-24 19:52:25.065684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.065934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.065959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.066120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.066301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.066325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.066508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.066703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.066729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.066913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.067093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.067117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.067323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.067579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.067603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.067789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.067989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.068017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.068229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.068430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.068457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 
00:21:43.882 [2024-04-24 19:52:25.068625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.068825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.068853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.069084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.069307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.069372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.069603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.069817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.069846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.070075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.070363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.070418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.070648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.070846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.882 [2024-04-24 19:52:25.070871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.882 qpair failed and we were unable to recover it. 00:21:43.882 [2024-04-24 19:52:25.071072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.071359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.071418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.071640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.071838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.071865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 
00:21:43.883 [2024-04-24 19:52:25.072058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.072233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.072257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.072429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.072624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.072658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.072880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.073109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.073134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.073295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.073532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.073559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.073760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.074041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.074090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.074271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.074507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.074556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.074794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.075063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.075110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 
00:21:43.883 [2024-04-24 19:52:25.075338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.075540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.075573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.075746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.076039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.076095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.076296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.076478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.076503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.076664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.076843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.076868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.077049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.077279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.077330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.077566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.077747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.077775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.077969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.078174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.078199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 
00:21:43.883 [2024-04-24 19:52:25.078374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.078552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.078581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.078800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.078980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.079005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.079216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.079387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.079414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.079640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.079844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.079869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.080054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.080204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.080228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.080432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.080642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.080671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.080869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.081078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.081105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 
00:21:43.883 [2024-04-24 19:52:25.081332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.081610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.081671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.081897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.082116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.082144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.082336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.082596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.082655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.082871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.083054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.083078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.083307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.083494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.083553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.883 [2024-04-24 19:52:25.083757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.083956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.883 [2024-04-24 19:52:25.083981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.883 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.084188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.084395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.084422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 
00:21:43.884 [2024-04-24 19:52:25.084655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.084856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.084882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.085093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.085414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.085464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.085693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.085873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.085900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.086107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.086327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.086354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.086551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.086762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.086790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.087026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.087244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.087296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.087493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.087705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.087734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 
00:21:43.884 [2024-04-24 19:52:25.087968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.088246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.088304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.088508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.088717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.088743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.088921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.089147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.089174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.089404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.089605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.089639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.089849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.090077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.090105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.090311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.090506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.090533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.090742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.090987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.091043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 
00:21:43.884 [2024-04-24 19:52:25.091273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.091495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.091553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.091733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.091955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.091982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.092196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.092419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.092447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.092657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.092843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.092869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.093027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.093291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.093340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.093572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.093806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.093834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.094046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.094252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.094280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 
00:21:43.884 [2024-04-24 19:52:25.094508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.094710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.094739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.094961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.095265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.095318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.095541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.095745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.095774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.095969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.096183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.096211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.096409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.096604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.096639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.096843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.097073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.097100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 00:21:43.884 [2024-04-24 19:52:25.097274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.097598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.884 [2024-04-24 19:52:25.097654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.884 qpair failed and we were unable to recover it. 
00:21:43.885 [2024-04-24 19:52:25.097859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.098036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.098064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.885 qpair failed and we were unable to recover it. 00:21:43.885 [2024-04-24 19:52:25.098250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.098441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.098468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.885 qpair failed and we were unable to recover it. 00:21:43.885 [2024-04-24 19:52:25.098696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.098864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.098897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.885 qpair failed and we were unable to recover it. 00:21:43.885 [2024-04-24 19:52:25.099086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.099287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.099314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.885 qpair failed and we were unable to recover it. 00:21:43.885 [2024-04-24 19:52:25.099505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.099734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.099796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.885 qpair failed and we were unable to recover it. 00:21:43.885 [2024-04-24 19:52:25.099973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.100171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.100199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.885 qpair failed and we were unable to recover it. 00:21:43.885 [2024-04-24 19:52:25.100363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.100569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.100596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.885 qpair failed and we were unable to recover it. 
00:21:43.885 [2024-04-24 19:52:25.100793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.100956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.100981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.885 qpair failed and we were unable to recover it. 00:21:43.885 [2024-04-24 19:52:25.101164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.101369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.101393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.885 qpair failed and we were unable to recover it. 00:21:43.885 [2024-04-24 19:52:25.101608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.101789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.101818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.885 qpair failed and we were unable to recover it. 00:21:43.885 [2024-04-24 19:52:25.102020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.102226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.102250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.885 qpair failed and we were unable to recover it. 00:21:43.885 [2024-04-24 19:52:25.102450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.102644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.102673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.885 qpair failed and we were unable to recover it. 00:21:43.885 [2024-04-24 19:52:25.102851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.103050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.103077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.885 qpair failed and we were unable to recover it. 00:21:43.885 [2024-04-24 19:52:25.103282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.103587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.885 [2024-04-24 19:52:25.103647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.885 qpair failed and we were unable to recover it. 
00:21:43.885 [2024-04-24 19:52:25.103848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.885 [2024-04-24 19:52:25.104148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.885 [2024-04-24 19:52:25.104195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.885 qpair failed and we were unable to recover it.
[... the same record repeats for every reconnect attempt from 19:52:25.104409 through 19:52:25.175318, differing only in timestamps: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x16cff30 at 10.0.0.2:4420, then "qpair failed and we were unable to recover it." ...]
00:21:43.890 [2024-04-24 19:52:25.175504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.891 [2024-04-24 19:52:25.175711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.891 [2024-04-24 19:52:25.175737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.891 qpair failed and we were unable to recover it.
00:21:43.891 [2024-04-24 19:52:25.175890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.176102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.176137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.176348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.176524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.176551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.176791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.176981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.177005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.177227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.177455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.177480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.177662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.177896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.177924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.178125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.178412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.178460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.178690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.178891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.178919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 
00:21:43.891 [2024-04-24 19:52:25.179103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.179306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.179331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.179501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.179722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.179751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.179977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.180300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.180348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.180554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.180784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.180812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.181038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.181293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.181348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.181528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.181691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.181719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.181921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.182249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.182299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 
00:21:43.891 [2024-04-24 19:52:25.182523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.182718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.182746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.182944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.183202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.183252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.183462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.183668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.183693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.183851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.184060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.184085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.184344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.184518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.184546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.184745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.185034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.185094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.185294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.185471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.185498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 
00:21:43.891 [2024-04-24 19:52:25.185728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.185933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.185971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.891 [2024-04-24 19:52:25.186191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.186435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.891 [2024-04-24 19:52:25.186483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.891 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.186706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.186931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.187002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.187208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.187429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.187454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.187640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.187842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.187870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.188057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.188251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.188279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.188502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.188706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.188734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 
00:21:43.892 [2024-04-24 19:52:25.188908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.189046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.189087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.189287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.189493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.189522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.189714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.189889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.189917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.190123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.190425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.190483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.190717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.190984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.191036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.191274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.191473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.191500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.191681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.191907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.191935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 
00:21:43.892 [2024-04-24 19:52:25.192135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.192429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.192477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.192653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.192853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.192882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.193057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.193293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.193344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.193568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.193739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.193768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.193971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.194139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.194166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.194375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.194557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.194582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.194748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.194945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.194973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 
00:21:43.892 [2024-04-24 19:52:25.195194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.195464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.195510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.195737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.195941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.195969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.196163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.196520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.196574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.196777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.196952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.196979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.197167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.197429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.197477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.197655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.197816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.197844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.198044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.198337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.198403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 
00:21:43.892 [2024-04-24 19:52:25.198638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.198876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.198900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.199049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.199251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.199276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.892 [2024-04-24 19:52:25.199475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.199643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.892 [2024-04-24 19:52:25.199673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.892 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.199893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.200107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.200159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.200357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.200557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.200584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.200780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.200982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.201010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.201215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.201393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.201420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 
00:21:43.893 [2024-04-24 19:52:25.201617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.201846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.201874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.202102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.202302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.202331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.202529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.202758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.202786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.202990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.203148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.203171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.203352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.203579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.203602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.203792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.203970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.203993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.204173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.204328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.204351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 
00:21:43.893 [2024-04-24 19:52:25.204539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.204756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.204782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.204943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.205096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.205121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.205302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.205457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.205499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.205676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.205860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.205886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.206071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.206276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.206303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.206525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.206754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.206780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.206998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.207188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.207215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 
00:21:43.893 [2024-04-24 19:52:25.207413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.207594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.207619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.207784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.207965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.207990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.208165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.208347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.208372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.208554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.208751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.208780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.208978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.209158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.209183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.209365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.209522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.209547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.209707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.209907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.209936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 
00:21:43.893 [2024-04-24 19:52:25.210133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.210333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.210357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.210538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.210705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.210734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.210920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.211122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.211164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.893 [2024-04-24 19:52:25.211324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.211484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.893 [2024-04-24 19:52:25.211512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.893 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.212524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.212769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.212797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.212959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.213123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.213149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.213301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.213477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.213507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 
00:21:43.894 [2024-04-24 19:52:25.213713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.213894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.213919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.214097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.214277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.214303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.214459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.214641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.214667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.214863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.215046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.215071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.215273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.215520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.215577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.215799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.216019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.216044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.216200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.216353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.216378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 
00:21:43.894 [2024-04-24 19:52:25.216573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.216768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.216794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.217030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.217232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.217260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.217459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.217689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.217715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.217908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.218114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.218155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.218325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.218503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.218527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.218697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.218874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.218898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.219078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.219271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.219295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 
00:21:43.894 [2024-04-24 19:52:25.219481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.219652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.219680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.219859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.220037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.220065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.220246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.220479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.220506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.220724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.220879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.220904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.221067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.221248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.221273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.221453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.221636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.221662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 00:21:43.894 [2024-04-24 19:52:25.221851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.222025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.894 [2024-04-24 19:52:25.222051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.894 qpair failed and we were unable to recover it. 
00:21:43.900 [2024-04-24 19:52:25.279982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.280207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.280248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 00:21:43.901 [2024-04-24 19:52:25.280462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.280646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.280680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 00:21:43.901 [2024-04-24 19:52:25.280891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.281074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.281100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 00:21:43.901 [2024-04-24 19:52:25.281299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.281478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.281503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 00:21:43.901 [2024-04-24 19:52:25.281662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.281860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.281886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 00:21:43.901 [2024-04-24 19:52:25.282121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.282304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.282329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 00:21:43.901 [2024-04-24 19:52:25.282488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.282659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.282702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 
00:21:43.901 [2024-04-24 19:52:25.282867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.283049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.283074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 00:21:43.901 [2024-04-24 19:52:25.283265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.283458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.283483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 00:21:43.901 [2024-04-24 19:52:25.283693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.283879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.283904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 00:21:43.901 [2024-04-24 19:52:25.284127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.284340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.284370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 00:21:43.901 [2024-04-24 19:52:25.284531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.284734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.284760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 00:21:43.901 [2024-04-24 19:52:25.284940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.285143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.285168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 00:21:43.901 [2024-04-24 19:52:25.285356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.285499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.285524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 
00:21:43.901 [2024-04-24 19:52:25.285717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.285872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.285897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 00:21:43.901 [2024-04-24 19:52:25.286057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.286236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.901 [2024-04-24 19:52:25.286261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.901 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.286448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.286592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.286616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.286776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.286962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.286986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.287164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.287350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.287375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.287564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.287752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.287778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.287943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.288117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.288146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 
00:21:43.902 [2024-04-24 19:52:25.288327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.288507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.288531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.288736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.288892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.288917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.289103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.289263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.289288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.289495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.289675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.289700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.289913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.290121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.290145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.290303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.290456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.290480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.290666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.290850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.290874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 
00:21:43.902 [2024-04-24 19:52:25.291039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.291251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.291276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.291461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.291640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.291666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.291875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.292031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.292056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.292204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.292356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.292381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.292595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.292794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.292820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.293013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.293189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.293214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.293387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.293542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.293567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 
00:21:43.902 [2024-04-24 19:52:25.293747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.293927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.293951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.294112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.294294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.294319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.294473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.294655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.294681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.902 [2024-04-24 19:52:25.294878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.295018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.902 [2024-04-24 19:52:25.295043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.902 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.295200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.295380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.295404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.295557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.295736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.295762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.295930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.296111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.296136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 
00:21:43.903 [2024-04-24 19:52:25.296318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.296523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.296548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.296702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.296863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.296888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.297077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.297253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.297277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.297485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.297642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.297668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.297898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.298047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.298071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.298224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.298374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.298400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.298584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.298763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.298789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 
00:21:43.903 [2024-04-24 19:52:25.298939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.299115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.299140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.299324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.299480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.299505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.299661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.299853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.299879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.300065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.300234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.300258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.300442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.300654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.300679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.300867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.301047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.301073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.301235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.301412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.301437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 
00:21:43.903 [2024-04-24 19:52:25.301593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.301782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.301808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.301996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.302145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.302169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.302343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.302518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.302543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.302728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.302905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.302930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.303111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.303289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.303314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.903 qpair failed and we were unable to recover it. 00:21:43.903 [2024-04-24 19:52:25.303505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.903 [2024-04-24 19:52:25.303672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.303698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.303916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.304066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.304091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 
00:21:43.904 [2024-04-24 19:52:25.304267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.304427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.304453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.304665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.304845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.304870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.305050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.305237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.305262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.305458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.305666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.305691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.305876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.306054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.306078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.306292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.306472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.306497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.306686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.306844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.306868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 
00:21:43.904 [2024-04-24 19:52:25.307053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.307221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.307245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.307428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.307582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.307612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.307799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.307956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.307982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.308168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.308349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.308373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.308556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.308733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.308761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.308920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.309126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.309151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.309354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.309534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.309559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 
00:21:43.904 [2024-04-24 19:52:25.309708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.309891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.309916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.310101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.310282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.310307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.310512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.310664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.310690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.310873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.311058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.311083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.311270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.311417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.311442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.311652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.311835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.311860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.312046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.312203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.312227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 
00:21:43.904 [2024-04-24 19:52:25.312436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.312596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.312620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.904 qpair failed and we were unable to recover it. 00:21:43.904 [2024-04-24 19:52:25.312833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.904 [2024-04-24 19:52:25.313024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.313049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.313229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.313378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.313403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.313582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.313773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.313798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.313989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.314143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.314168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.314350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.314523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.314548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.314747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.314914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.314938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 
00:21:43.905 [2024-04-24 19:52:25.315085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.315292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.315317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.315468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.315647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.315673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.315855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.316056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.316081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.316270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.316453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.316478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.316640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.316824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.316849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.317003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.317209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.317234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.317416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.317560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.317585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 
00:21:43.905 [2024-04-24 19:52:25.317800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.317951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.317978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.318125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.318335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.318360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.318545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.318726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.318752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.318932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.319090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.319116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.319306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.319484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.319509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.319667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.319865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.319890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.320074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.320255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.320279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 
00:21:43.905 [2024-04-24 19:52:25.320498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.320683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.320708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.320902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.321082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.321107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.321293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.321469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.321494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.321674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.321862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.321887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.905 qpair failed and we were unable to recover it. 00:21:43.905 [2024-04-24 19:52:25.322099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.905 [2024-04-24 19:52:25.322275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.322300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.322465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.322680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.322706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.322915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.323125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.323149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 
00:21:43.906 [2024-04-24 19:52:25.323326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.323488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.323513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.323694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.323877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.323901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.324061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.324240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.324265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.324421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.324626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.324657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.324840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.325024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.325049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.325256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.325403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.325427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.325636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.325833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.325858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 
00:21:43.906 [2024-04-24 19:52:25.326079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.326260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.326285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.326495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.326677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.326703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.326914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.327091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.327115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.327321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.327501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.327530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.327736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.327918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.327943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.328150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.328353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.328377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.328579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.328770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.328795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 
00:21:43.906 [2024-04-24 19:52:25.328976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.329153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.329178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.329327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.329479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.329504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.329688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.329898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.329922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.330107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.330268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.330292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.330471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.330621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.330665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.330849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.331003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.331028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.331210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.331395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.331420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 
00:21:43.906 [2024-04-24 19:52:25.331611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.331802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.906 [2024-04-24 19:52:25.331828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.906 qpair failed and we were unable to recover it. 00:21:43.906 [2024-04-24 19:52:25.332015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.332223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.332248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.332406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.332546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.332571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.332745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.332926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.332952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.333163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.333340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.333365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.333573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.333738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.333763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.333921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.334066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.334091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 
00:21:43.907 [2024-04-24 19:52:25.334274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.334431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.334456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.334665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.334818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.334843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.335049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.335259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.335284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.335500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.335682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.335707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.335889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.336096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.336121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.336299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.336446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.336472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.336680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.336860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.336885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 
00:21:43.907 [2024-04-24 19:52:25.337038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.337210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.337235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.337390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.337569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.337593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.337750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.337902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.337926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.338137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.338347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.338372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.338556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.338731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.338757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.907 qpair failed and we were unable to recover it. 00:21:43.907 [2024-04-24 19:52:25.338909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.339063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.907 [2024-04-24 19:52:25.339089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.339301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.339451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.339478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 
00:21:43.908 [2024-04-24 19:52:25.339642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.339796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.339821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.339977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.340195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.340220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.340369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.340584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.340609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.340781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.340962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.340986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.341169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.341318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.341343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.341528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.341688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.341713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.341892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.342051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.342076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 
00:21:43.908 [2024-04-24 19:52:25.342281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.342434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.342459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.342677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.342857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.342882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.343061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.343246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.343271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.343421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.343601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.343626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.343812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.343974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.343998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.344157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.344374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.344399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.344579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.344789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.344814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 
00:21:43.908 [2024-04-24 19:52:25.344999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.345205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.345230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.345408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.345583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.345607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.345772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.345933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.345961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.346164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.346377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.346402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.346558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.346718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.346743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.346922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.347106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.347135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.347316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.347496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.347521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 
00:21:43.908 [2024-04-24 19:52:25.347727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.347908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.347933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.348115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.348331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.348356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.348533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.348739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.348765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.348977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.349153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.349177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.349360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.349514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.349540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.349747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.349908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.349933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 00:21:43.908 [2024-04-24 19:52:25.350110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.350289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.350313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.908 qpair failed and we were unable to recover it. 
00:21:43.908 [2024-04-24 19:52:25.350528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.908 [2024-04-24 19:52:25.350735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.350760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.350939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.351113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.351137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.351317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.351524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.351548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.351701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.351885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.351910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.352128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.352307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.352332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.352489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.352662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.352688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.352862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.353033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.353058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 
00:21:43.909 [2024-04-24 19:52:25.353233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.353413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.353437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.353621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.353840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.353865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.354043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.354248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.354273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.354451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.354654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.354680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.354840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.355015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.355040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.355201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.355380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.355404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.355594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.355764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.355791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 
00:21:43.909 [2024-04-24 19:52:25.355999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.356185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.356210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.356391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.356584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.356608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.356806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.356984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.357010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.357190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.357369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.357394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.357573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.357736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.357761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.357919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.358103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.358128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.358310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.358495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.358520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 
00:21:43.909 [2024-04-24 19:52:25.358704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.358866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.358891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.359053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.359202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.359227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.359407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.359614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.359646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.359837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.360019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.360044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.360227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.360435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.360459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.909 qpair failed and we were unable to recover it. 00:21:43.909 [2024-04-24 19:52:25.360639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.909 [2024-04-24 19:52:25.360798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.360823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 00:21:43.910 [2024-04-24 19:52:25.360980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.361154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.361179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 
00:21:43.910 [2024-04-24 19:52:25.361340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.361491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.361515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 00:21:43.910 [2024-04-24 19:52:25.361692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.361901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.361925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 00:21:43.910 [2024-04-24 19:52:25.362077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.362237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.362262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 00:21:43.910 [2024-04-24 19:52:25.362466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.362646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.362671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 00:21:43.910 [2024-04-24 19:52:25.362823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.362993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.363020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 00:21:43.910 [2024-04-24 19:52:25.363199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.363351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.363376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 00:21:43.910 [2024-04-24 19:52:25.363525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.363711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.363737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 
00:21:43.910 [2024-04-24 19:52:25.363926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.364109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.364133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 00:21:43.910 [2024-04-24 19:52:25.364342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.364496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.364520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 00:21:43.910 [2024-04-24 19:52:25.364682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.364870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.364894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 00:21:43.910 [2024-04-24 19:52:25.365058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.365204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.365229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 00:21:43.910 [2024-04-24 19:52:25.365416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.365584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.365609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 00:21:43.910 [2024-04-24 19:52:25.365780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.365943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.365970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 00:21:43.910 [2024-04-24 19:52:25.367250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.367484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.910 [2024-04-24 19:52:25.367511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:43.910 qpair failed and we were unable to recover it. 
00:21:43.910 [2024-04-24 19:52:25.367700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.910 [2024-04-24 19:52:25.367862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:43.910 [2024-04-24 19:52:25.367893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:43.910 qpair failed and we were unable to recover it.
[... the same failure cycle (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x16cff30 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 19:52:25.367700 through 19:52:25.427020 (Jenkins timestamps 00:21:43.910 to 00:21:44.195); only the first and last cycles are shown here ...]
00:21:44.195 [2024-04-24 19:52:25.426807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.195 [2024-04-24 19:52:25.426995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.195 [2024-04-24 19:52:25.427020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:44.195 qpair failed and we were unable to recover it.
00:21:44.195 [2024-04-24 19:52:25.427210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.427389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.427413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.195 qpair failed and we were unable to recover it. 00:21:44.195 [2024-04-24 19:52:25.427571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.427727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.427753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.195 qpair failed and we were unable to recover it. 00:21:44.195 [2024-04-24 19:52:25.427962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.428173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.428198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.195 qpair failed and we were unable to recover it. 00:21:44.195 [2024-04-24 19:52:25.428416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.428571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.428597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.195 qpair failed and we were unable to recover it. 00:21:44.195 [2024-04-24 19:52:25.428798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.428992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.429016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.195 qpair failed and we were unable to recover it. 00:21:44.195 [2024-04-24 19:52:25.429172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.429389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.429413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.195 qpair failed and we were unable to recover it. 00:21:44.195 [2024-04-24 19:52:25.429614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.429785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.429811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.195 qpair failed and we were unable to recover it. 
00:21:44.195 [2024-04-24 19:52:25.430018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.430203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.430229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.195 qpair failed and we were unable to recover it. 00:21:44.195 [2024-04-24 19:52:25.430448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.430602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.430632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.195 qpair failed and we were unable to recover it. 00:21:44.195 [2024-04-24 19:52:25.430809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.430990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.431019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.195 qpair failed and we were unable to recover it. 00:21:44.195 [2024-04-24 19:52:25.431239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.431412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.431437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.195 qpair failed and we were unable to recover it. 00:21:44.195 [2024-04-24 19:52:25.431612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.431806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.431831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.195 qpair failed and we were unable to recover it. 00:21:44.195 [2024-04-24 19:52:25.432019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.432232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.432257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.195 qpair failed and we were unable to recover it. 00:21:44.195 [2024-04-24 19:52:25.432434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.195 [2024-04-24 19:52:25.432609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.432640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 
00:21:44.196 [2024-04-24 19:52:25.432848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.433028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.433052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.433238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.433422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.433447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.433596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.433788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.433814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.434018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.434225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.434250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.434433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.434620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.434651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.434841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.434995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.435024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.435207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.435390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.435415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 
00:21:44.196 [2024-04-24 19:52:25.435605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.435818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.435844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.436026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.436185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.436210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.436419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.436590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.436615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.436789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.436982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.437006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.437183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.437363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.437388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.437581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.437792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.437818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.437978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.438162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.438188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 
00:21:44.196 [2024-04-24 19:52:25.438371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.438554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.438579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.438769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.438930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.438956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.439145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.439324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.439349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.439522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.439730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.439756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.439952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.440156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.440180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.440364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.440521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.440545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.440750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.440910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.440937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 
00:21:44.196 [2024-04-24 19:52:25.441093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.441273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.441297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.441505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.441650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.441685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.441944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.442150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.442174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.442334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.442490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.442514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.442696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.442856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.442880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.443094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.443272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.443296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 00:21:44.196 [2024-04-24 19:52:25.443502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.443686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.196 [2024-04-24 19:52:25.443711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.196 qpair failed and we were unable to recover it. 
00:21:44.196 [2024-04-24 19:52:25.443888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.444090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.444114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.444258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.444514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.444539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.444748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.444904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.444929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.445113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.445255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.445280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.445464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.445617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.445649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.445835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.446007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.446031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.446187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.446392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.446416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 
00:21:44.197 [2024-04-24 19:52:25.446594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.446752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.446777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.446928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.447147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.447172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.447354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.447532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.447557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.447742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.448002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.448027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.448232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.448414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.448439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.448620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.448883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.448907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.449094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.449250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.449274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 
00:21:44.197 [2024-04-24 19:52:25.449535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.449796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.449822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.449998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.450202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.450227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.450413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.450565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.450590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.450789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.450944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.450968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.451121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.451276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.451301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.451518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.451669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.451694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.451874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.452052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.452076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 
00:21:44.197 [2024-04-24 19:52:25.452286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.452471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.452496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.452706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.452915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.452939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.453100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.453306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.453331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.453481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.453664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.453689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.453868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.454050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.454075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.454236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.454396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.454422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.454641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.454790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.454816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 
00:21:44.197 [2024-04-24 19:52:25.454973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.455155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.197 [2024-04-24 19:52:25.455185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.197 qpair failed and we were unable to recover it. 00:21:44.197 [2024-04-24 19:52:25.455366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.455548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.455573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.455736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.455997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.456023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.456179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.456365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.456390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.456569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.456730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.456755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.457013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.457192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.457216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.457391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.457543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.457568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 
00:21:44.198 [2024-04-24 19:52:25.457749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.457956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.457981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.458167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.458318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.458343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.458528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.458713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.458738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.458927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.459104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.459129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.459317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.459500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.459525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.459733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.459916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.459941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.460199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.460378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.460402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 
00:21:44.198 [2024-04-24 19:52:25.460585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.460770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.460795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.460955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.461133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.461157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.461337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.461514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.461539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.461746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.461906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.461931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.462140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.462315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.462340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.462501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.462694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.462719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.462898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.463079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.463104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 
00:21:44.198 [2024-04-24 19:52:25.463317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.463466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.463491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.463676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.463859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.463883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.464089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.464271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.464296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.464489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.464641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.464667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.464860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.465049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.465074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.465280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.465460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.465487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.465667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.465817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.465842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 
00:21:44.198 [2024-04-24 19:52:25.466029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.466236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.466261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.198 qpair failed and we were unable to recover it. 00:21:44.198 [2024-04-24 19:52:25.466441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.198 [2024-04-24 19:52:25.466623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.199 [2024-04-24 19:52:25.466654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.199 qpair failed and we were unable to recover it. 00:21:44.199 [2024-04-24 19:52:25.466844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.199 [2024-04-24 19:52:25.466997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.199 [2024-04-24 19:52:25.467022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.199 qpair failed and we were unable to recover it. 00:21:44.199 [2024-04-24 19:52:25.467184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.199 [2024-04-24 19:52:25.467372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.199 [2024-04-24 19:52:25.467397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.199 qpair failed and we were unable to recover it. 00:21:44.199 [2024-04-24 19:52:25.467553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.199 [2024-04-24 19:52:25.467706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.199 [2024-04-24 19:52:25.467732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.199 qpair failed and we were unable to recover it. 00:21:44.199 [2024-04-24 19:52:25.467886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.199 [2024-04-24 19:52:25.468030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.199 [2024-04-24 19:52:25.468054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.199 qpair failed and we were unable to recover it. 00:21:44.199 [2024-04-24 19:52:25.468240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.199 [2024-04-24 19:52:25.468424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.199 [2024-04-24 19:52:25.468448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.199 qpair failed and we were unable to recover it. 
[The preceding four-line block — two posix_sock_create connect() failures with errno = 111 (ECONNREFUSED), an nvme_tcp_qpair_connect_sock error for tqpair=0x16cff30 at addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously with only the timestamps advancing, from 19:52:25.468644 through 19:52:25.525529 (wall clock 00:21:44.199 to 00:21:44.204).]
00:21:44.204 [2024-04-24 19:52:25.525715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.525864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.525889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.204 qpair failed and we were unable to recover it. 00:21:44.204 [2024-04-24 19:52:25.526095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.526251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.526275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.204 qpair failed and we were unable to recover it. 00:21:44.204 [2024-04-24 19:52:25.526436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.526616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.526649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.204 qpair failed and we were unable to recover it. 00:21:44.204 [2024-04-24 19:52:25.526835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.527044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.527068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.204 qpair failed and we were unable to recover it. 00:21:44.204 [2024-04-24 19:52:25.527226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.527373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.527398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.204 qpair failed and we were unable to recover it. 00:21:44.204 [2024-04-24 19:52:25.527577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.527734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.527759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.204 qpair failed and we were unable to recover it. 00:21:44.204 [2024-04-24 19:52:25.527965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.528149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.528173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.204 qpair failed and we were unable to recover it. 
00:21:44.204 [2024-04-24 19:52:25.528320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.528473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.528499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.204 qpair failed and we were unable to recover it. 00:21:44.204 [2024-04-24 19:52:25.528679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.528888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.528913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.204 qpair failed and we were unable to recover it. 00:21:44.204 [2024-04-24 19:52:25.529071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.529224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.204 [2024-04-24 19:52:25.529248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.204 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.529430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.529615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.529647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.529824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.529978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.530003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.530185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.530339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.530364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.530522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.530725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.530751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 
00:21:44.205 [2024-04-24 19:52:25.530931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.531090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.531114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.531294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.531454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.531479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.531666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.531822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.531846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.532000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.532172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.532196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.532370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.532551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.532575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.532737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.532886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.532911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.533094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.533276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.533302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 
00:21:44.205 [2024-04-24 19:52:25.533487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.533673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.533698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.533862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.534014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.534039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.534215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.534390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.534414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.534640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.534828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.534853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.535057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.535278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.535302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.535488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.535693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.535719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.535871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.536056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.536081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 
00:21:44.205 [2024-04-24 19:52:25.536245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.536426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.536451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.536605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.536796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.536821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.536980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.537161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.537186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.537364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.537570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.537595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.537821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.537983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.538012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.538191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.538398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.538422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.538642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.538829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.538854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 
00:21:44.205 [2024-04-24 19:52:25.539031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.539179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.539204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.539426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.539584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.539609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.539781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.539962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.539987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.205 qpair failed and we were unable to recover it. 00:21:44.205 [2024-04-24 19:52:25.540144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.205 [2024-04-24 19:52:25.540297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.540321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.540478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.540670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.540695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.540858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.541035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.541060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.541242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.541401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.541426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 
00:21:44.206 [2024-04-24 19:52:25.541597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.541757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.541783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.541965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.542113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.542138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.542460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.542667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.542693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.542872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.543086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.543111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.543285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.543504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.543529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.543710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.543891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.543916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.544100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.544284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.544308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 
00:21:44.206 [2024-04-24 19:52:25.544490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.544644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.544669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.544833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.545023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.545047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.545222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.545377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.545402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.545614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.545791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.545816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.546007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.546194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.546219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.546403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.546606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.546638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.546798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.546979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.547004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 
00:21:44.206 [2024-04-24 19:52:25.547175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.547331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.547356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.547503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.547683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.547709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.547870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.548043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.548068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.548241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.548445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.548470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.548626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.548793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.548817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.548997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.549172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.549197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.549386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.549569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.549594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 
00:21:44.206 [2024-04-24 19:52:25.549766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.549948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.549973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.550156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.550337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.550362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.550516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.550730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.550756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.550940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.551122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.206 [2024-04-24 19:52:25.551147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.206 qpair failed and we were unable to recover it. 00:21:44.206 [2024-04-24 19:52:25.551301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.551519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.551543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.551729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.551940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.551965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.552123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.552268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.552293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 
00:21:44.207 [2024-04-24 19:52:25.552450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.552636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.552661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.552850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.553004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.553028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.553216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.553394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.553418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.553569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.553783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.553809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.553996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.554151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.554176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.554358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.554539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.554564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.554751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.554960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.554985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 
00:21:44.207 [2024-04-24 19:52:25.555189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.555344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.555369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.555577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.555794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.555819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.555999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.556181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.556206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.556367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.556554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.556579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.556743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.556918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.556943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.557103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.557295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.557320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.557505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.557690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.557720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 
00:21:44.207 [2024-04-24 19:52:25.557890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.558044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.558070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.558258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.558413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.558438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.558651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.558859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.558884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.559077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.559289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.559314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.559497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.559683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.559709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.559872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.560077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.560101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.560251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.560421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.560447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 
00:21:44.207 [2024-04-24 19:52:25.560624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.560845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.560870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.561025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.561230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.561255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.561407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.561594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.561618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.561819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.561999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.562024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.207 qpair failed and we were unable to recover it. 00:21:44.207 [2024-04-24 19:52:25.562229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.562387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.207 [2024-04-24 19:52:25.562412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.208 qpair failed and we were unable to recover it. 00:21:44.208 [2024-04-24 19:52:25.562596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.208 [2024-04-24 19:52:25.562788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.208 [2024-04-24 19:52:25.562813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.208 qpair failed and we were unable to recover it. 00:21:44.208 [2024-04-24 19:52:25.562972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.208 [2024-04-24 19:52:25.563125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.208 [2024-04-24 19:52:25.563149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.208 qpair failed and we were unable to recover it. 
00:21:44.208 [2024-04-24 19:52:25.563358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.208 [2024-04-24 19:52:25.563526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.208 [2024-04-24 19:52:25.563550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.208 qpair failed and we were unable to recover it.
00:21:44.208 [... the four-entry failure group above (two posix_sock_create connect() failed, errno = 111 entries, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x16cff30 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats back-to-back roughly 150 more times with no other output in between, timestamps 19:52:25.563 through 19:52:25.622 ...]
00:21:44.213 [2024-04-24 19:52:25.622403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.622604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.622639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.213 qpair failed and we were unable to recover it. 00:21:44.213 [2024-04-24 19:52:25.622788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.622935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.622960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.213 qpair failed and we were unable to recover it. 00:21:44.213 [2024-04-24 19:52:25.623143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.623320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.623344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.213 qpair failed and we were unable to recover it. 00:21:44.213 [2024-04-24 19:52:25.623572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.623728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.623753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.213 qpair failed and we were unable to recover it. 00:21:44.213 [2024-04-24 19:52:25.623937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.624141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.624165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.213 qpair failed and we were unable to recover it. 00:21:44.213 [2024-04-24 19:52:25.624348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.624537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.624561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.213 qpair failed and we were unable to recover it. 00:21:44.213 [2024-04-24 19:52:25.624741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.624921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.624946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.213 qpair failed and we were unable to recover it. 
00:21:44.213 [2024-04-24 19:52:25.625127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.625334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.625359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.213 qpair failed and we were unable to recover it. 00:21:44.213 [2024-04-24 19:52:25.625543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.625717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.625743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.213 qpair failed and we were unable to recover it. 00:21:44.213 [2024-04-24 19:52:25.625893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.626069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.626095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.213 qpair failed and we were unable to recover it. 00:21:44.213 [2024-04-24 19:52:25.626255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.626430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.213 [2024-04-24 19:52:25.626454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.213 qpair failed and we were unable to recover it. 00:21:44.213 [2024-04-24 19:52:25.626606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.626762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.626787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.626950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.627153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.627177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.627365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.627510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.627534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 
00:21:44.214 [2024-04-24 19:52:25.627742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.627924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.627949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.628136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.628292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.628317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.628492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.628709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.628735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.628879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.629057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.629082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.629288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.629499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.629524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.629688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.629840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.629864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.630047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.630196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.630222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 
00:21:44.214 [2024-04-24 19:52:25.630381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.630591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.630616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.630781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.630932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.630956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.631110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.631289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.631313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.631495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.631715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.631741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.631913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.632067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.632092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.632250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.632430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.632455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.632612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.632782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.632808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 
00:21:44.214 [2024-04-24 19:52:25.632998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.633208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.633233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.633451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.633649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.633676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.633883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.634064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.634089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.634268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.634446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.634471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.634652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.634808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.634834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.635018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.635199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.635224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.635401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.635555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.635580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 
00:21:44.214 [2024-04-24 19:52:25.635755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.635943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.635969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.636152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.636328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.636353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.636574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.636748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.636773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.636956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.637133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.637157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.637315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.637468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.637492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.637703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.637870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.637895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 00:21:44.214 [2024-04-24 19:52:25.638075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.638257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.638283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.214 qpair failed and we were unable to recover it. 
00:21:44.214 [2024-04-24 19:52:25.638438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.214 [2024-04-24 19:52:25.638638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.638664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.638844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.639025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.639050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.639230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.639403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.639428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.639610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.639809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.639839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.640022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.640204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.640229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.640405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.640607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.640639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.640824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.640983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.641008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 
00:21:44.215 [2024-04-24 19:52:25.641198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.641379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.641404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.641584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.641736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.641762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.641905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.642053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.642077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.642234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.642415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.642439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.642620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.642801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.642826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.642971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.643117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.643141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.643317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.643474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.643498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 
00:21:44.215 [2024-04-24 19:52:25.643708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.643892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.643917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.644098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.644251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.644278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.644474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.644663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.644689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.644840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.645020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.645045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.645224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.645372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.645397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.645548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.645698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.645724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.645911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.646088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.646113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 
00:21:44.215 [2024-04-24 19:52:25.646271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.646453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.646477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.646686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.646895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.646919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.647098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.647267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.647292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.647446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.647598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.647622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.647825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.647981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.648006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.648198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.649028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.649058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.649248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.649410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.649435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 
00:21:44.215 [2024-04-24 19:52:25.649612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.649821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.649847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.650039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.650231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.650256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.650430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.650612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.650646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.215 qpair failed and we were unable to recover it. 00:21:44.215 [2024-04-24 19:52:25.650855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.215 [2024-04-24 19:52:25.651040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.651065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.651219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.651365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.651390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.651567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.651722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.651748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.651974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.652128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.652153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 
00:21:44.216 [2024-04-24 19:52:25.652332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.652488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.652513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.652694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.652871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.652898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.653083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.653237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.653262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.653442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.653638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.653674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.653857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.654038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.654063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.654267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.654411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.654437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.654598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.654766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.654792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 
00:21:44.216 [2024-04-24 19:52:25.655005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.655212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.655237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.655420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.655638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.655665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.655903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.656091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.656117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.656299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.656485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.656511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.656697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.656860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.656885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.657090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.657244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.657269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.657474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.657700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.657726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 
00:21:44.216 [2024-04-24 19:52:25.657931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.658085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.658109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.658295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.658479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.658504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.658675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.658851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.658876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.659037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.659216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.659240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.659393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.659551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.659576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.659787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.659966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.660003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 00:21:44.216 [2024-04-24 19:52:25.660151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.660299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.216 [2024-04-24 19:52:25.660323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.216 qpair failed and we were unable to recover it. 
00:21:44.216 [2024-04-24 19:52:25.660508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.216 [2024-04-24 19:52:25.660699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.216 [2024-04-24 19:52:25.660726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:44.216 qpair failed and we were unable to recover it.
00:21:44.216 - 00:21:44.498 (the four-line failure sequence above repeats 152 more times, timestamps 19:52:25.660879 through 19:52:25.718852; every connect() attempt for tqpair=0x16cff30 to addr=10.0.0.2, port=4420 fails with errno = 111)
00:21:44.498 [2024-04-24 19:52:25.719007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.498 [2024-04-24 19:52:25.719213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.498 [2024-04-24 19:52:25.719237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:44.498 qpair failed and we were unable to recover it.
00:21:44.498 [2024-04-24 19:52:25.719398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.719578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.719602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.498 qpair failed and we were unable to recover it. 00:21:44.498 [2024-04-24 19:52:25.719775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.719931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.719956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.498 qpair failed and we were unable to recover it. 00:21:44.498 [2024-04-24 19:52:25.720112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.720294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.720319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.498 qpair failed and we were unable to recover it. 00:21:44.498 [2024-04-24 19:52:25.720470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.720650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.720676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.498 qpair failed and we were unable to recover it. 00:21:44.498 [2024-04-24 19:52:25.720838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.721021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.721046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.498 qpair failed and we were unable to recover it. 00:21:44.498 [2024-04-24 19:52:25.721232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.721436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.721461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.498 qpair failed and we were unable to recover it. 00:21:44.498 [2024-04-24 19:52:25.721654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.721811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.721836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.498 qpair failed and we were unable to recover it. 
00:21:44.498 [2024-04-24 19:52:25.721986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.722167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.722197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.498 qpair failed and we were unable to recover it. 00:21:44.498 [2024-04-24 19:52:25.722377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.722534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.722574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.498 qpair failed and we were unable to recover it. 00:21:44.498 [2024-04-24 19:52:25.722766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.722916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.722942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.498 qpair failed and we were unable to recover it. 00:21:44.498 [2024-04-24 19:52:25.723157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.723367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.723391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.498 qpair failed and we were unable to recover it. 00:21:44.498 [2024-04-24 19:52:25.723575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.723759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.723785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.498 qpair failed and we were unable to recover it. 00:21:44.498 [2024-04-24 19:52:25.723974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.724188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.724214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.498 qpair failed and we were unable to recover it. 00:21:44.498 [2024-04-24 19:52:25.724397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.498 [2024-04-24 19:52:25.724550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.724575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 
00:21:44.499 [2024-04-24 19:52:25.724763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.724968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.724993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.725202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.725380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.725404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.725585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.725779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.725805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.725966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.726140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.726169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.726326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.726501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.726525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.726707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.726884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.726909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.727129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.727334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.727358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 
00:21:44.499 [2024-04-24 19:52:25.727562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.727745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.727770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.727930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.728120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.728144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.728352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.728538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.728563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.728734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.728913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.728938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.729122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.729326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.729350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.729527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.729682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.729708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.729893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.730074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.730099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 
00:21:44.499 [2024-04-24 19:52:25.730290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.730474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.730499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.730682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.730832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.730857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.731049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.731207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.731231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.731436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.731609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.731639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.731838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.732019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.732044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.732224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.732403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.732428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.732639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.732833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.732858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 
00:21:44.499 [2024-04-24 19:52:25.733061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.733266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.733290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.733496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.733687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.733713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.733892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.734093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.734118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.734277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.734437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.734462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.499 qpair failed and we were unable to recover it. 00:21:44.499 [2024-04-24 19:52:25.734613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.734806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.499 [2024-04-24 19:52:25.734832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.734993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.735198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.735223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.735376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.735543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.735568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 
00:21:44.500 [2024-04-24 19:52:25.735759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.735939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.735964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.736159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.736335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.736360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.736522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.736682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.736708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.736896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.737078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.737103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.737257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.737418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.737443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.737655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.737801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.737826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.737986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.738186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.738212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 
00:21:44.500 [2024-04-24 19:52:25.738419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.738603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.738634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.738791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.738947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.738972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.739148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.739307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.739332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.739518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.739702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.739728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.739910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.740099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.740124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.740308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.740460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.740484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.740670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.740845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.740869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 
00:21:44.500 [2024-04-24 19:52:25.741044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.741226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.741251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.741457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.741664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.741690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.741868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.742128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.742153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.742334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.742519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.742544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.742724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.742873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.742898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.743050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.743305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.743330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.500 [2024-04-24 19:52:25.743515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.743701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.743726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 
00:21:44.500 [2024-04-24 19:52:25.743910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.744059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.500 [2024-04-24 19:52:25.744085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.500 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.744270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.744477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.744502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.744683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.744869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.744894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.745055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.745229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.745254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.745401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.745580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.745604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.745809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.745991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.746020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.746201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.746383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.746408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 
00:21:44.501 [2024-04-24 19:52:25.746559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.746738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.746764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.746939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.747093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.747120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.747305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.747483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.747508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.747692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.747872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.747898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.748111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.748268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.748295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.748505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.748713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.748738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.748902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.749085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.749110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 
00:21:44.501 [2024-04-24 19:52:25.749293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.749478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.749503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.749709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.749893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.749918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.750106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.750285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.750309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.750493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.750671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.750697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.750871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.751050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.751075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.751280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.751440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.751464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.751642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.751797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.751822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 
00:21:44.501 [2024-04-24 19:52:25.751986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.752138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.752164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.752371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.752554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.752579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.752740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.752897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.752922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.753080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.753260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.753285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.753496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.753680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.753706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.753913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.754091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.754116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.754305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.754482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.754507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 
00:21:44.501 [2024-04-24 19:52:25.754689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.754875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.754900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.501 qpair failed and we were unable to recover it. 00:21:44.501 [2024-04-24 19:52:25.755085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.501 [2024-04-24 19:52:25.755262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.502 [2024-04-24 19:52:25.755287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.502 qpair failed and we were unable to recover it. 00:21:44.502 [2024-04-24 19:52:25.755492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.502 [2024-04-24 19:52:25.755674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.502 [2024-04-24 19:52:25.755699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.502 qpair failed and we were unable to recover it. 00:21:44.502 [2024-04-24 19:52:25.755877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.502 [2024-04-24 19:52:25.756041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.502 [2024-04-24 19:52:25.756067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.502 qpair failed and we were unable to recover it. 00:21:44.502 [2024-04-24 19:52:25.756220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.502 [2024-04-24 19:52:25.756391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.502 [2024-04-24 19:52:25.756416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.502 qpair failed and we were unable to recover it. 00:21:44.502 [2024-04-24 19:52:25.756612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.502 [2024-04-24 19:52:25.756800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.502 [2024-04-24 19:52:25.756826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.502 qpair failed and we were unable to recover it. 00:21:44.502 [2024-04-24 19:52:25.757003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.502 [2024-04-24 19:52:25.757177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.502 [2024-04-24 19:52:25.757202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.502 qpair failed and we were unable to recover it. 
00:21:44.502 [2024-04-24 19:52:25.757393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.502 [2024-04-24 19:52:25.757574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.502 [2024-04-24 19:52:25.757599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:44.502 qpair failed and we were unable to recover it.
00:21:44.502 [the same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats for every reconnect attempt, 19:52:25.757 through 19:52:25.765]
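errno = 111 here is Linux ECONNREFUSED: the host keeps retrying connect() against 10.0.0.2:4420 after the target stopped listening, and every attempt is refused. A minimal, hypothetical bash probe (not part of the SPDK scripts) that reproduces the same failure mode:

```bash
# Hypothetical probe, not taken from the SPDK test suite: bash's /dev/tcp
# pseudo-device issues a connect(), so with no listener on the NVMe-oF
# port it fails the same way posix_sock_create does above (ECONNREFUSED).
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "listener present on 10.0.0.2:4420"
else
    echo "connect() refused or timed out, matching the errno = 111 lines"
fi
```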
00:21:44.502 [the same failure sequence continues, 19:52:25.765 through 19:52:25.768]
00:21:44.503 [2024-04-24 19:52:25.768051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1779542 Killed "${NVMF_APP[@]}" "$@"
00:21:44.503 qpair failed and we were unable to recover it.
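The Killed notice is the test doing its job: target_disconnect.sh SIGKILLs the running nvmf_tgt (pid 1779542), which is exactly what turns every subsequent host-side connect() into a refusal. A sketch of that step, with the app command array assumed for illustration:

```bash
# Assumed shape of the disconnect step, not copied from target_disconnect.sh:
# start the target, kill it hard, and reap it.
NVMF_APP=(./build/bin/nvmf_tgt)   # placeholder command array
"${NVMF_APP[@]}" & target_pid=$!
kill -9 "$target_pid"             # shell reports: Killed "${NVMF_APP[@]}"
wait "$target_pid"                # exit status 137 = 128 + SIGKILL(9)
# Until a new target listens on 10.0.0.2:4420 again, host reconnects
# keep failing with ECONNREFUSED (errno 111).
```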
00:21:44.503 19:52:25 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:21:44.503 19:52:25 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:21:44.503 19:52:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:21:44.503 19:52:25 -- common/autotest_common.sh@710 -- # xtrace_disable
00:21:44.503 19:52:25 -- common/autotest_common.sh@10 -- # set +x
00:21:44.503 [reconnect failures continue in the background, 19:52:25.768 through 19:52:25.773]
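Interleaved with the error stream, the xtrace shows recovery beginning: disconnect_init calls nvmfappstart -m 0xF0, i.e. a fresh nvmf_tgt pinned to cores 4-7 (mask 0xF0). A simplified sketch of what the traced launch amounts to; the path, netns name, and flags are taken from the trace below, while the surrounding helper logic is assumed:

```bash
# Simplified restart mirroring the traced nvmf/common.sh steps; the real
# nvmfappstart does more bookkeeping than shown here.
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
# -m 0xF0 pins reactors to cores 4-7, -e 0xFFFF enables all tracepoint
# groups, -i 0 selects shared-memory instance id 0.
ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
echo "nvmf_tgt restarted as pid $nvmfpid"
```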
00:21:44.503 19:52:25 -- nvmf/common.sh@470 -- # nvmfpid=1780101
00:21:44.503 19:52:25 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:21:44.503 19:52:25 -- nvmf/common.sh@471 -- # waitforlisten 1780101
00:21:44.503 19:52:25 -- common/autotest_common.sh@817 -- # '[' -z 1780101 ']'
00:21:44.503 19:52:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:44.503 19:52:25 -- common/autotest_common.sh@822 -- # local max_retries=100
00:21:44.503 19:52:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:44.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:44.503 19:52:25 -- common/autotest_common.sh@826 -- # xtrace_disable
00:21:44.503 19:52:25 -- common/autotest_common.sh@10 -- # set +x
00:21:44.504 [reconnect failures continue in the background, 19:52:25.773 through 19:52:25.778]
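waitforlisten 1780101 then polls until the new target's RPC socket /var/tmp/spdk.sock accepts commands, bounded by max_retries=100. A minimal loop in the same spirit (assumed shape; the real helper drives the RPC layer rather than just testing the socket path):

```bash
# Miniature version of the traced wait: poll for the RPC UNIX socket,
# but bail out early if the target process died.
rpc_addr=/var/tmp/spdk.sock
max_retries=100
pid=1780101
for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
    [[ -S $rpc_addr ]] && { echo "RPC socket is up after $i polls"; break; }
    sleep 0.1
done
```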
00:21:44.504 [the connect() failed (errno = 111) / sock connection error / qpair failed sequence keeps repeating while the new target starts, 19:52:25.778 through 19:52:25.815]
00:21:44.507 [2024-04-24 19:52:25.815740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.507 [2024-04-24 19:52:25.815920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.507 [2024-04-24 19:52:25.815945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.507 qpair failed and we were unable to recover it. 00:21:44.507 [2024-04-24 19:52:25.816108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.507 [2024-04-24 19:52:25.816289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.507 [2024-04-24 19:52:25.816314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.507 qpair failed and we were unable to recover it. 00:21:44.507 [2024-04-24 19:52:25.816498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.816691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.816717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.508 qpair failed and we were unable to recover it. 00:21:44.508 [2024-04-24 19:52:25.816905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.817083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.817108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.508 qpair failed and we were unable to recover it. 00:21:44.508 [2024-04-24 19:52:25.817268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.817431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.817455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.508 qpair failed and we were unable to recover it. 00:21:44.508 [2024-04-24 19:52:25.817657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.817813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.817838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.508 qpair failed and we were unable to recover it. 00:21:44.508 [2024-04-24 19:52:25.818013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.818153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.818178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.508 qpair failed and we were unable to recover it. 
00:21:44.508 [2024-04-24 19:52:25.818364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.818556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.818581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.508 qpair failed and we were unable to recover it. 00:21:44.508 [2024-04-24 19:52:25.818767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.818960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.818985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.508 qpair failed and we were unable to recover it. 00:21:44.508 [2024-04-24 19:52:25.819143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.819322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.819347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.508 qpair failed and we were unable to recover it. 00:21:44.508 [2024-04-24 19:52:25.819539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.819740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.819766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.508 qpair failed and we were unable to recover it. 00:21:44.508 [2024-04-24 19:52:25.819921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.820102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.820126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.508 qpair failed and we were unable to recover it. 00:21:44.508 [2024-04-24 19:52:25.820331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.820513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.820538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.508 qpair failed and we were unable to recover it. 00:21:44.508 [2024-04-24 19:52:25.820689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.820844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.508 [2024-04-24 19:52:25.820870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.508 qpair failed and we were unable to recover it. 
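For readers triaging this failure: errno 111 is ECONNREFUSED on Linux, i.e. the TCP SYN reached the target address but nothing was accepting connections on port 4420 (the IANA-assigned NVMe/TCP port). The following minimal sketch is not SPDK code; it is an illustrative standalone reproduction, and the address and port are taken from the log rather than from any test configuration:

    /* Sketch: connect() to a reachable host with no listener on the port
     * fails with errno = 111 (ECONNREFUSED), matching the log above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no NVMe-oF target listening (but the host reachable),
             * this prints: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

If the address were unreachable instead, connect() would time out or return a different errno (e.g. EHOSTUNREACH), so the steady stream of 111s here indicates the target side simply is not listening yet.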
[... connect()/qpair-failure sequence continues through 19:52:25.822, interleaved with the target-side startup messages below ...] 00:21:44.508 [2024-04-24 19:52:25.822918] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:21:44.508 [2024-04-24 19:52:25.822998] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
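For reference, `-c 0xF0` is the DPDK EAL coremask: 0xF0 = 0b11110000, so bits 4-7 are set and this nvmf application is pinned to logical cores 4 through 7; `--file-prefix=spdk0` and `--proc-type=auto` keep its hugepage files and shared state separate from the other SPDK processes in the same test run. The connection errors above are consistent with the host retrying before this target process finishes initializing its listener.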
[... the connect()/qpair-failure sequence resumes and repeats unchanged -- posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." -- from 19:52:25.823 through 19:52:25.863, with only the timestamps advancing ...]
00:21:44.512 [2024-04-24 19:52:25.863921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.864120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.864146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.512 qpair failed and we were unable to recover it. 00:21:44.512 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.512 [2024-04-24 19:52:25.864330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.864483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.864508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.512 qpair failed and we were unable to recover it. 00:21:44.512 [2024-04-24 19:52:25.864661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.864813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.864838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.512 qpair failed and we were unable to recover it. 00:21:44.512 [2024-04-24 19:52:25.865017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.865222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.865247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.512 qpair failed and we were unable to recover it. 00:21:44.512 [2024-04-24 19:52:25.865433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.865617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.865648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.512 qpair failed and we were unable to recover it. 00:21:44.512 [2024-04-24 19:52:25.865838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.866018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.866043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.512 qpair failed and we were unable to recover it. 00:21:44.512 [2024-04-24 19:52:25.866230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.866385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.866409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.512 qpair failed and we were unable to recover it. 
00:21:44.512 [2024-04-24 19:52:25.866569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.866746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.866771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.512 qpair failed and we were unable to recover it. 00:21:44.512 [2024-04-24 19:52:25.866922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.867072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.867097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.512 qpair failed and we were unable to recover it. 00:21:44.512 [2024-04-24 19:52:25.867283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.867469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.867494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.512 qpair failed and we were unable to recover it. 00:21:44.512 [2024-04-24 19:52:25.867705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.512 [2024-04-24 19:52:25.867869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.867895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.868088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.868273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.868298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.868504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.868686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.868712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.868930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.869087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.869114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 
00:21:44.513 [2024-04-24 19:52:25.869302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.869521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.869547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.869712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.869874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.869899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.870108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.870263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.870290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.870497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.870660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.870686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.870877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.871087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.871113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.871269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.871422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.871447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.871626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.871824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.871849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 
00:21:44.513 [2024-04-24 19:52:25.872032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.872193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.872219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.872400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.872579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.872604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.872794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.872975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.873000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.873205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.873461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.873492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.873670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.873825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.873851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.874046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.874269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.874294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.874479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.874639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.874665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 
00:21:44.513 [2024-04-24 19:52:25.874850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.875031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.875055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.875266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.875413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.875438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.875613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.875784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.875809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.875985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.876168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.876193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.876372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.876550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.876575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.876776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.876993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.877018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.877234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.877382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.877407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 
00:21:44.513 [2024-04-24 19:52:25.877594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.877766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.877792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.877978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.878184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.878209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.878393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.878580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.878604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.513 [2024-04-24 19:52:25.878806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.878961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.513 [2024-04-24 19:52:25.878986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.513 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.879164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.879348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.879373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.879554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.879738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.879763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.879942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.880162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.880186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 
00:21:44.514 [2024-04-24 19:52:25.880344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.880517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.880542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.880724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.880880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.880905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.881087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.881245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.881270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.881470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.881654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.881679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.881869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.882046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.882070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.882256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.882412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.882436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.882646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.882834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.882859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 
00:21:44.514 [2024-04-24 19:52:25.883043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.883192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.883217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.883367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.883572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.883596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.883786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.883946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.883971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.884120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.884302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.884327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.884590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.884783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.884809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.884968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.885171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.885195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.885380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.885557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.885582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 
00:21:44.514 [2024-04-24 19:52:25.885742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.885890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.885916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.886091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.886348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.886372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.886555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.886739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.886764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.886974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.887134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.887160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.887342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.887600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.887625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.514 qpair failed and we were unable to recover it. 00:21:44.514 [2024-04-24 19:52:25.887852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.514 [2024-04-24 19:52:25.888057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.888082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.888241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.888422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.888448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 
00:21:44.515 [2024-04-24 19:52:25.888607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.888788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.888814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.889007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.889155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.889180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.889372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.889558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.889582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.889772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.889920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.889946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.890097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.890275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.890300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.890483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.890640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.890666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.890849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.891056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.891081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 
00:21:44.515 [2024-04-24 19:52:25.891276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.891429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.891455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.891618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.891815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.891840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.892000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.892184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.892208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.892421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.892567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.892592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.892750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.892931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.892956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.893139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.893325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.893354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.893536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.893790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.893816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 
00:21:44.515 [2024-04-24 19:52:25.893997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.894207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.894232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.894445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.894655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.894695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.894903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.895075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.895099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.895256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.895433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.895457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.895722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.895911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.895936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.896142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.896299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.896325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.896507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.896689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.896715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 
00:21:44.515 [2024-04-24 19:52:25.896893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.897077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.897102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.897320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.897514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.897539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.897723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.897877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.897902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.898063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.898242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.898267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.898491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.898697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.898723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.898940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.899091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.899116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.515 qpair failed and we were unable to recover it. 00:21:44.515 [2024-04-24 19:52:25.899300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.515 [2024-04-24 19:52:25.899479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.899503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.516 qpair failed and we were unable to recover it. 
00:21:44.516 [2024-04-24 19:52:25.899696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.899903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.899928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.516 qpair failed and we were unable to recover it. 00:21:44.516 [2024-04-24 19:52:25.900143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.900302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.900327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.516 qpair failed and we were unable to recover it. 00:21:44.516 [2024-04-24 19:52:25.900507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.900657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.900683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.516 qpair failed and we were unable to recover it. 00:21:44.516 [2024-04-24 19:52:25.900892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.901068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.901092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.516 qpair failed and we were unable to recover it. 00:21:44.516 [2024-04-24 19:52:25.901277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.901453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.901478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.516 qpair failed and we were unable to recover it. 00:21:44.516 [2024-04-24 19:52:25.901663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.901824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.901850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.516 qpair failed and we were unable to recover it. 00:21:44.516 [2024-04-24 19:52:25.902040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.902246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.902271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.516 qpair failed and we were unable to recover it. 
00:21:44.516 [2024-04-24 19:52:25.902454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.902651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.516 [2024-04-24 19:52:25.902676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.516 qpair failed and we were unable to recover it.
00:21:44.516 [... the same sequence — connect() failed, errno = 111 (connection refused) from posix_sock_create, sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 from nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it." — repeats continuously for every reconnect attempt from 19:52:25.902454 through 19:52:25.962417 ...]
00:21:44.517 [2024-04-24 19:52:25.920531] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:44.522 [2024-04-24 19:52:25.962392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.962417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it.
00:21:44.522 [2024-04-24 19:52:25.962565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.962783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.962809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.962997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.963213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.963237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.963415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.963575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.963600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.963795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.963976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.964000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.964183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.964374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.964399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.964584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.964737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.964762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.964945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.965127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.965152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 
00:21:44.522 [2024-04-24 19:52:25.965343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.965504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.965529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.965720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.965903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.965928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.966138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.966317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.966342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.966498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.966684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.966711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.966896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.967074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.967099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.967282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.967482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.967507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.967704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.967858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.967882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 
00:21:44.522 [2024-04-24 19:52:25.968061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.968214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.968240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.968436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.968620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.968659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.968844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.969005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.969030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.969233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.969442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.969467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.969672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.969829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.969856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.970072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.970279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.970306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.970489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.970708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.970734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 
00:21:44.522 [2024-04-24 19:52:25.970923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.971079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.971104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.971288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.971445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.971472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.971657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.971862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.971889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.972079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.972288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.972314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.972498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.972692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.972719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.522 qpair failed and we were unable to recover it. 00:21:44.522 [2024-04-24 19:52:25.972904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.522 [2024-04-24 19:52:25.973089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.973116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.973329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.973485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.973510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 
00:21:44.523 [2024-04-24 19:52:25.973694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.973878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.973906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.974066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.974223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.974250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.974429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.974615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.974649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.974835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.975019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.975046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.975199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.975386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.975413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.975568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.975753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.975780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.975968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.976121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.976146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 
00:21:44.523 [2024-04-24 19:52:25.976362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.976504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.976529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.976716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.976903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.976929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.977114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.977313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.977338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.977495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.977691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.977718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.977906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.978063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.978088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.978266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.978442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.978466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.978660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.978842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.978867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 
00:21:44.523 [2024-04-24 19:52:25.979025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.979173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.979198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.979356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.979531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.979556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.979741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.979893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.979917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.980069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.980279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.980304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.980466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.980675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.980701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.980915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.981128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.981157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 00:21:44.523 [2024-04-24 19:52:25.981334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.981520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.981545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.523 qpair failed and we were unable to recover it. 
00:21:44.523 [2024-04-24 19:52:25.981733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.523 [2024-04-24 19:52:25.981946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.981971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.982158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.982308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.982334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.982518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.982678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.982703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.982857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.983049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.983073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.983254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.983402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.983427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.983640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.983798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.983824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.984011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.984216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.984241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 
00:21:44.524 [2024-04-24 19:52:25.984401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.984581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.984605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.984837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.984986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.985015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.985167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.985369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.985394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.985610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.985804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.985829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.986037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.986212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.986239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.986457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.986636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.986662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.986810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.986963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.986988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 
00:21:44.524 [2024-04-24 19:52:25.987145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.987322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.987347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.987525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.987707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.987732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.987914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.988098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.988123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.988325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.988534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.988559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.988742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.988924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.988949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.989101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.989285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.989310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.989489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.989647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.989673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 
00:21:44.524 [2024-04-24 19:52:25.989883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.990038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.990064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.990212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.990419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.990444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.990632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.990788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.990813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.990978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.991190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.991215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.991394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.991548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.991574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.991755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.991930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.991955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 00:21:44.524 [2024-04-24 19:52:25.992166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.992322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.992346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.524 qpair failed and we were unable to recover it. 
00:21:44.524 [2024-04-24 19:52:25.992505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.524 [2024-04-24 19:52:25.992709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.525 [2024-04-24 19:52:25.992735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.525 qpair failed and we were unable to recover it. 00:21:44.525 [2024-04-24 19:52:25.992926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.525 [2024-04-24 19:52:25.993111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.807 [2024-04-24 19:52:25.993136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.807 qpair failed and we were unable to recover it. 00:21:44.807 [2024-04-24 19:52:25.993295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.993473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.993499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.993685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.993842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.993867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.994028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.994183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.994208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.994361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.994511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.994536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.994686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.994860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.994885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 
00:21:44.808 [2024-04-24 19:52:25.995064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.995224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.995250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.995444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.995592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.995617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.995782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.995998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.996023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.996183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.996394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.996419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.996605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.996792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.996818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.996978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.997153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.997177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.997333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.997485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.997509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 
00:21:44.808 [2024-04-24 19:52:25.997719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.997906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.997931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.998124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.998340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.998365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.998524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.998703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.998728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.998883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.999042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.999066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.999223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.999402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.999427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:25.999613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.999809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:25.999834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 00:21:44.808 [2024-04-24 19:52:26.000024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:26.000234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.808 [2024-04-24 19:52:26.000259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.808 qpair failed and we were unable to recover it. 
00:21:44.808 [2024-04-24 19:52:26.000451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.808 [2024-04-24 19:52:26.000661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.808 [2024-04-24 19:52:26.000687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:44.808 qpair failed and we were unable to recover it.
[... the same four-line sequence -- two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x16cff30 against addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." -- repeats back-to-back, with only the microsecond timestamps changing, from 19:52:26.000451 through 19:52:26.059403 ...]
00:21:44.812 [2024-04-24 19:52:26.059566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.059720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.059747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.059899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.060081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.060107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.060262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.060412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.060437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.060639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.060828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.060853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.061035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.061182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.061207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.061367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.061524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.061551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.061721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.061873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.061898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 
00:21:44.812 [2024-04-24 19:52:26.062084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.062301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.062326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.062505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.062657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.062682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.062867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.063052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.063081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.063270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.063424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.063449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.063603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.063762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.063788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.063966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.064144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.064168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.064322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.064478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.064503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 
00:21:44.812 [2024-04-24 19:52:26.064683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.064841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.064866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.065049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.065228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.065253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.065407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.065596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.065621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.065817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.065997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.066022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.066208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.066383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.066408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.066622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.066816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.066846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.067001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.067148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.067173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 
00:21:44.812 [2024-04-24 19:52:26.067327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.067510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.067534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.067700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.067880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.067905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.068092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.068274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.068299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.068482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.068650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.068676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.068835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.069043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.069069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.069222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.069421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.069446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.069605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.069792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.069818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 
00:21:44.812 [2024-04-24 19:52:26.070001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.070187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.070213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.070401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.070585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.070610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.070808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.070964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.070989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.071172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.071354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.812 [2024-04-24 19:52:26.071379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.812 qpair failed and we were unable to recover it. 00:21:44.812 [2024-04-24 19:52:26.071591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.813 [2024-04-24 19:52:26.071758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.813 [2024-04-24 19:52:26.071784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.813 qpair failed and we were unable to recover it. 00:21:44.813 [2024-04-24 19:52:26.071944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.813 [2024-04-24 19:52:26.072096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.813 [2024-04-24 19:52:26.072121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.813 qpair failed and we were unable to recover it. 00:21:44.813 [2024-04-24 19:52:26.072315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.813 [2024-04-24 19:52:26.072473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.813 [2024-04-24 19:52:26.072499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.813 qpair failed and we were unable to recover it. 
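errno = 111 is ECONNREFUSED on Linux: each connect() to 10.0.0.2:4420 is being actively refused, meaning nothing is accepting connections on the NVMe/TCP port at that moment. Two standard probes confirm this from a shell (illustrative commands, not part of this run; the address and port are taken from the errors above):

  # One-shot probe of the listener the initiator is trying to reach
  nc -zv 10.0.0.2 4420
  # On the target host, check whether anything is listening on port 4420
  ss -ltn | grep :4420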
[The same tqpair=0x16cff30 failure pattern continues with timestamps 19:52:26.072676 through 19:52:26.073532.]
00:21:44.813 [2024-04-24 19:52:26.073717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.813 [2024-04-24 19:52:26.073714] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:44.813 [2024-04-24 19:52:26.073751] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:44.813 [2024-04-24 19:52:26.073765] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:44.813 [2024-04-24 19:52:26.073777] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:44.813 [2024-04-24 19:52:26.073787] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:44.813 [2024-04-24 19:52:26.073897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.813 [2024-04-24 19:52:26.073925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:44.813 qpair failed and we were unable to recover it.
00:21:44.813 [2024-04-24 19:52:26.073938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:21:44.813 [2024-04-24 19:52:26.074002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:21:44.813 [2024-04-24 19:52:26.074033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:21:44.813 [2024-04-24 19:52:26.074037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:21:44.813 [2024-04-24 19:52:26.074138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.813 [2024-04-24 19:52:26.074315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.813 [2024-04-24 19:52:26.074342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:44.813 qpair failed and we were unable to recover it.
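The app_setup_trace notices above give the capture recipe for this run: the nvmf target keeps its tracepoint data in shared memory, and a snapshot can be pulled while the app is still up. A minimal sequence based only on those notices (the /tmp destination is an arbitrary example):

  # Snapshot the nvmf app's trace events at runtime, as the notice suggests
  spdk_trace -s nvmf -i 0
  # Or keep the raw shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0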
[The tqpair=0x16cff30 failure pattern resumes and repeats with timestamps 19:52:26.074525 through 19:52:26.087072.]
[The tqpair=0x16cff30 failure pattern continues through 19:52:26.088606, then the qpair handle changes:]
00:21:44.813 [2024-04-24 19:52:26.088805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.813 [2024-04-24 19:52:26.089008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.813 [2024-04-24 19:52:26.089042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:44.813 qpair failed and we were unable to recover it.
[The same pattern then repeats against tqpair=0x7f4438000b90 with timestamps 19:52:26.089243 through 19:52:26.089905.]
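The handle in the errors switches here from tqpair=0x16cff30 to tqpair=0x7f4438000b90, so a second qpair object is now failing against the same 10.0.0.2:4420 endpoint. When triaging a console log like this one, a per-handle tally makes that switch easy to spot (sketch; console.log is a placeholder for wherever this output was saved):

  # Count connection failures per qpair handle
  grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c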
00:21:44.814 [2024-04-24 19:52:26.090110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.090313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.090342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.090533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.090708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.090738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.090959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.091126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.091156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.091349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.091538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.091568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.091744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.091941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.091970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.092139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.092360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.092390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.092566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.092799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.092830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 
00:21:44.814 [2024-04-24 19:52:26.093133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.093301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.093330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.093520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.093717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.093747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.096643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.096846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.096877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.097103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.097273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.097305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.097513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.097684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.097715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.097924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.098090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.098120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.098314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.098538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.098567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 
00:21:44.814 [2024-04-24 19:52:26.098770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.098945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.098974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.099149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.099347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.099376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.099571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.099736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.099772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.099934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.100091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.100118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.100316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.100537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.100573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.100782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.100948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.100977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 00:21:44.814 [2024-04-24 19:52:26.101140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.101351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.814 [2024-04-24 19:52:26.101379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420 00:21:44.814 qpair failed and we were unable to recover it. 
[... the same three-record sequence (two posix.c:1037:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock error for tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats 70 more times, timestamps 19:52:26.101651 through 19:52:26.127781 ...]
00:21:44.815 [2024-04-24 19:52:26.127958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.815 [2024-04-24 19:52:26.128142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.815 [2024-04-24 19:52:26.128167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:44.815 qpair failed and we were unable to recover it.
00:21:44.815 [2024-04-24 19:52:26.128321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.815 [2024-04-24 19:52:26.128473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.815 [2024-04-24 19:52:26.128498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:44.815 qpair failed and we were unable to recover it.
00:21:44.815 [2024-04-24 19:52:26.128660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.815 [2024-04-24 19:52:26.128844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.815 [2024-04-24 19:52:26.128869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:44.815 qpair failed and we were unable to recover it.
00:21:44.815 [2024-04-24 19:52:26.129020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.815 [2024-04-24 19:52:26.129202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.815 [2024-04-24 19:52:26.129227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:44.815 qpair failed and we were unable to recover it.
00:21:44.815 [2024-04-24 19:52:26.129392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.815 [2024-04-24 19:52:26.129573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.815 [2024-04-24 19:52:26.129598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4438000b90 with addr=10.0.0.2, port=4420
00:21:44.815 qpair failed and we were unable to recover it.
00:21:44.815 [2024-04-24 19:52:26.129662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16dd860 (9): Bad file descriptor
00:21:44.815 [2024-04-24 19:52:26.129939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.815 [2024-04-24 19:52:26.130117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.815 [2024-04-24 19:52:26.130151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:44.815 qpair failed and we were unable to recover it.
00:21:44.815 [2024-04-24 19:52:26.130320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.815 [2024-04-24 19:52:26.130497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.815 [2024-04-24 19:52:26.130522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:44.815 qpair failed and we were unable to recover it.
[... the same three-record sequence repeats 63 more times for tqpair=0x16cff30 with addr=10.0.0.2, port=4420, timestamps 19:52:26.130696 through 19:52:26.154199 ...]
00:21:44.817 [2024-04-24 19:52:26.154355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.154522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.154546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.154730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.154882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.154908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.155100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.155266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.155291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.155487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.155641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.155667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.155814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.155991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.156016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.156178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.156359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.156384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.156543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.156718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.156744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 
00:21:44.817 [2024-04-24 19:52:26.156943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.157135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.157160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.157338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.157518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.157544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.157713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.157923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.157955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.158108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.158257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.158282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.158466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.158653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.158691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.158852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.159031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.159056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.159215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.159365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.159392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 
00:21:44.817 [2024-04-24 19:52:26.159560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.159717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.159743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.159929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.160086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.160113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.160289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.160433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.160458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.160612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.160779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.160805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.160966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.161160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.161185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.161388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.161567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.161592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.161785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.161930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.161956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 
00:21:44.817 [2024-04-24 19:52:26.162148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.162327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.162351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.162494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.162647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.162684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.162859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.163036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.163062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.163213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.163410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.163435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.163593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.163775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.163801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.163980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.164142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.164168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.164321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.164532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.164557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 
00:21:44.817 [2024-04-24 19:52:26.164722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.164873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.164900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.165082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.165275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.165300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.165460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.165654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.165688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.165855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.166038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.166063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.166242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.166394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.166419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.166606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.166800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.166826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.167005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.167185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.167210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 
00:21:44.817 [2024-04-24 19:52:26.167357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.167507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.167531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.167725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.167909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.167944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.168139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.168292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.168322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.168518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.168698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.168724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.168886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.169061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.169086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.169257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.169433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.169458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.169666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.169827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.169854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 
00:21:44.817 [2024-04-24 19:52:26.170068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.170264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.170289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.170440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.170590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.170617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.170811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.170965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.170991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.171141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.171321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.171346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.171551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.171712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.171739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.171896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.172073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.172098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.172283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.172492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.172518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 
00:21:44.817 [2024-04-24 19:52:26.172677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.172861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.172886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.817 qpair failed and we were unable to recover it. 00:21:44.817 [2024-04-24 19:52:26.173029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.817 [2024-04-24 19:52:26.173203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.173228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.173374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.173535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.173560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.173732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.173879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.173904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.174056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.174206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.174232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.174391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.174549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.174574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.174724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.174901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.174926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 
00:21:44.818 [2024-04-24 19:52:26.175104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.175248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.175273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.175482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.175643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.175670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.175867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.176066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.176091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.176247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.176395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.176420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.176604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.176774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.176800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.176977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.177172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.177197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.177375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.177556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.177581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 
00:21:44.818 [2024-04-24 19:52:26.177806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.177955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.177981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.178166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.178328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.178353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.178529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.178722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.178749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.178930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.179077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.179102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.179278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.179463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.179489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.179688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.179871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.179896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.180047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.180195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.180221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 
00:21:44.818 [2024-04-24 19:52:26.180380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.180530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.180555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.180727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.180902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.180926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.181080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.181284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.181309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.181517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.181698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.181723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.181900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.182082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.182106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.182275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.182419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.182444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.182601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.182763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.182790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 
00:21:44.818 [2024-04-24 19:52:26.182938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.183147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.183172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.183355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.183509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.183535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.183681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.183827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.183852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.184031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.184204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.184229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.184409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.184558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.184584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.184769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.184971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.184996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.185196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.185362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.185386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 
00:21:44.818 [2024-04-24 19:52:26.185566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.185738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.185764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.185911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.186082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.186107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.186253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.186436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.186461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.186616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.186773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.186798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.186995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.187203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.187231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.187382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.187542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.187567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.187759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.187936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.187962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 
00:21:44.818 [2024-04-24 19:52:26.188111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.188263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.188288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.188435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.188585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.188609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.188773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.188930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.188955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.189159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.189306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.189331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.189504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.189689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.189715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.189880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.190068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.190093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.190241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.190393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.190417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 
00:21:44.818 [2024-04-24 19:52:26.190616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.190775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.190804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.190991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.191167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.191192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.818 qpair failed and we were unable to recover it. 00:21:44.818 [2024-04-24 19:52:26.191373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.818 [2024-04-24 19:52:26.191552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.819 [2024-04-24 19:52:26.191577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.819 qpair failed and we were unable to recover it. 00:21:44.819 [2024-04-24 19:52:26.191796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.819 [2024-04-24 19:52:26.191978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.819 [2024-04-24 19:52:26.192004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.819 qpair failed and we were unable to recover it. 00:21:44.819 [2024-04-24 19:52:26.192159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.819 [2024-04-24 19:52:26.192325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.819 [2024-04-24 19:52:26.192350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.819 qpair failed and we were unable to recover it. 00:21:44.819 [2024-04-24 19:52:26.192530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.819 [2024-04-24 19:52:26.192701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.819 [2024-04-24 19:52:26.192727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.819 qpair failed and we were unable to recover it. 00:21:44.819 [2024-04-24 19:52:26.192883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.819 [2024-04-24 19:52:26.193061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.819 [2024-04-24 19:52:26.193086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.819 qpair failed and we were unable to recover it. 
00:21:44.819 [2024-04-24 19:52:26.193251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.819 [2024-04-24 19:52:26.193452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.819 [2024-04-24 19:52:26.193478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:44.819 qpair failed and we were unable to recover it.
00:21:44.819 [the four-record sequence above repeats verbatim with advancing timestamps, 19:52:26.193636 through 19:52:26.200901]
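errno 111 is ECONNREFUSED on Linux: each reconnect attempt reaches 10.0.0.2, but nothing is accepting on NVMe/TCP port 4420, which is the condition this target_disconnect test provokes. The standalone C sketch below (plain POSIX sockets, not SPDK's posix_sock_create; the address and port are copied from the log) reproduces the same errno when no listener is present:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Plain TCP client socket; SPDK's posix sock layer does the same
     * syscalls under the hood, just non-blocking and retried. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}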
00:21:44.819 [the connect()-failure sequence continues, 19:52:26.201054 through 19:52:26.203332, interleaved with the shell trace below]
00:21:44.819 19:52:26 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:21:44.819 19:52:26 -- common/autotest_common.sh@850 -- # return 0
00:21:44.819 19:52:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:21:44.819 19:52:26 -- common/autotest_common.sh@716 -- # xtrace_disable
00:21:44.819 19:52:26 -- common/autotest_common.sh@10 -- # set +x
00:21:44.819 [the connect()-failure sequence repeats, 19:52:26.203511 through 19:52:26.223949]
00:21:44.820 [the connect()-failure sequence continues, 19:52:26.224134 through 19:52:26.226477, interleaved with the shell trace below]
00:21:44.820 19:52:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:44.820 19:52:26 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:44.820 19:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:44.820 19:52:26 -- common/autotest_common.sh@10 -- # set +x
00:21:44.820 [2024-04-24 19:52:26.226650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.226829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.226854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.820 qpair failed and we were unable to recover it. 00:21:44.820 [2024-04-24 19:52:26.227017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.227185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.227210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.820 qpair failed and we were unable to recover it. 00:21:44.820 [2024-04-24 19:52:26.227364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.227514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.227538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.820 qpair failed and we were unable to recover it. 00:21:44.820 [2024-04-24 19:52:26.227702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.227910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.227935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.820 qpair failed and we were unable to recover it. 00:21:44.820 [2024-04-24 19:52:26.228118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.228295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.228320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.820 qpair failed and we were unable to recover it. 00:21:44.820 [2024-04-24 19:52:26.228502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.228652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.228678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.820 qpair failed and we were unable to recover it. 00:21:44.820 [2024-04-24 19:52:26.228875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.229052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.229077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.820 qpair failed and we were unable to recover it. 
00:21:44.820 [2024-04-24 19:52:26.229222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.229400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.229425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.820 qpair failed and we were unable to recover it. 00:21:44.820 [2024-04-24 19:52:26.229596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.229784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.229809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.820 qpair failed and we were unable to recover it. 00:21:44.820 [2024-04-24 19:52:26.229993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.230147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.230172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.820 qpair failed and we were unable to recover it. 00:21:44.820 [2024-04-24 19:52:26.230348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.230493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.230517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.820 qpair failed and we were unable to recover it. 00:21:44.820 [2024-04-24 19:52:26.230700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.230874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.230899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.820 qpair failed and we were unable to recover it. 00:21:44.820 [2024-04-24 19:52:26.231043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.231195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.820 [2024-04-24 19:52:26.231221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.820 qpair failed and we were unable to recover it. 00:21:44.820 [2024-04-24 19:52:26.231373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.231535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.231560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 
00:21:44.821 [2024-04-24 19:52:26.231717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.231897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.231922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.232094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.232244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.232269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.232456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.232605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.232650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.232834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.232978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.233002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.233154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.233319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.233344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.233516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.233709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.233735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.233888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.234076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.234101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 
00:21:44.821 [2024-04-24 19:52:26.234279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.234428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.234453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.234636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.234803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.234828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.235039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.235204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.235229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.235521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.235674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.235709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.235870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.236031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.236055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.236237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.236415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.236440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.236615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.236783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.236809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 
00:21:44.821 [2024-04-24 19:52:26.236969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.237156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.237181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.237389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.237561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.237586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.237793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.237991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.238016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.238167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.238347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.238372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.238527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.238717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.238743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.238930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.239115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.239144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 00:21:44.821 [2024-04-24 19:52:26.239354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.239512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.821 [2024-04-24 19:52:26.239536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420 00:21:44.821 qpair failed and we were unable to recover it. 
00:21:44.821 [2024-04-24 19:52:26.239689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.821 [2024-04-24 19:52:26.239873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:44.821 [2024-04-24 19:52:26.239898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cff30 with addr=10.0.0.2, port=4420
00:21:44.821 qpair failed and we were unable to recover it.
[... the connect()/sock-connection-error/"qpair failed" record above repeats back-to-back with fresh timestamps, roughly every 350 µs, from 19:52:26.239 through 19:52:26.280; the repetitions are elided here, and only the interleaved test-script trace lines and target NOTICE lines from that window are kept below ...]
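(Aside: errno = 111 is ECONNREFUSED on Linux. The host's connect() to 10.0.0.2:4420 is being actively refused because, at this point in the log, the target is not yet listening on that port; its listener only comes up at 19:52:26.280 below. An illustrative decode of the errno value:)

  $ python3 -c 'import errno, os; print(errno.errorcode[111], "=", os.strerror(111))'
  ECONNREFUSED = Connection refused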
00:21:44.821 Malloc0
00:21:44.822 19:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:44.822 19:52:26 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:21:44.822 19:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:44.822 19:52:26 -- common/autotest_common.sh@10 -- # set +x
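(The bare "Malloc0" above is most likely the bdev name echoed back by the harness's malloc bdev creation RPC, whose own trace lines are not captured in this window; that is an inference, not something shown in the log. A typical invocation would look like the sketch below, where the total-size and block-size arguments are assumptions rather than values from this run:)

  $ scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  Malloc0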
00:21:44.822 [2024-04-24 19:52:26.252649] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:44.822 19:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:44.822 19:52:26 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:44.822 19:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:44.822 19:52:26 -- common/autotest_common.sh@10 -- # set +x
00:21:44.822 19:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:44.823 19:52:26 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:44.823 19:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:44.823 19:52:26 -- common/autotest_common.sh@10 -- # set +x
00:21:44.823 19:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:44.823 19:52:26 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:44.823 19:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:44.823 19:52:26 -- common/autotest_common.sh@10 -- # set +x
00:21:44.823 [2024-04-24 19:52:26.280867] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:44.823 [2024-04-24 19:52:26.283402] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:44.823 [2024-04-24 19:52:26.283595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:44.823 [2024-04-24 19:52:26.283623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:44.823 [2024-04-24 19:52:26.283647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:44.823 [2024-04-24 19:52:26.283660] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:44.823 [2024-04-24 19:52:26.283695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:44.823 qpair failed and we were unable to recover it.
00:21:44.823 19:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:44.823 19:52:26 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:44.823 19:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:21:44.823 19:52:26 -- common/autotest_common.sh@10 -- # set +x
00:21:44.823 19:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:21:44.823 19:52:26 -- host/target_disconnect.sh@58 -- # wait 1779593
00:21:44.823 [2024-04-24 19:52:26.293295] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:44.823 [2024-04-24 19:52:26.293458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:44.823 [2024-04-24 19:52:26.293485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:44.823 [2024-04-24 19:52:26.293500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:44.823 [2024-04-24 19:52:26.293513] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:44.823 [2024-04-24 19:52:26.293550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:44.823 qpair failed and we were unable to recover it.
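(For reference: the target-side bring-up traced above, that is the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, namespace Malloc0, and the data and discovery listeners, corresponds to the RPC sequence sketched below. The verbs and arguments are copied from the rpc_cmd trace lines; invoking them via scripts/rpc.py is an assumption about how rpc_cmd dispatches, not something shown in this log:)

  $ scripts/rpc.py nvmf_create_transport -t tcp -o
  $ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $ scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $ scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420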
00:21:44.823 [2024-04-24 19:52:26.303423] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.084 [2024-04-24 19:52:26.303626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.084 [2024-04-24 19:52:26.303671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.084 [2024-04-24 19:52:26.303689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.084 [2024-04-24 19:52:26.303702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.084 [2024-04-24 19:52:26.303731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.084 qpair failed and we were unable to recover it. 00:21:45.084 [2024-04-24 19:52:26.313260] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.084 [2024-04-24 19:52:26.313422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.084 [2024-04-24 19:52:26.313448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.084 [2024-04-24 19:52:26.313464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.084 [2024-04-24 19:52:26.313477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.084 [2024-04-24 19:52:26.313505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.084 qpair failed and we were unable to recover it. 00:21:45.084 [2024-04-24 19:52:26.323274] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.084 [2024-04-24 19:52:26.323442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.084 [2024-04-24 19:52:26.323469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.084 [2024-04-24 19:52:26.323483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.084 [2024-04-24 19:52:26.323496] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.084 [2024-04-24 19:52:26.323524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.084 qpair failed and we were unable to recover it. 
00:21:45.084 [2024-04-24 19:52:26.333305] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.084 [2024-04-24 19:52:26.333469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.084 [2024-04-24 19:52:26.333495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.084 [2024-04-24 19:52:26.333511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.084 [2024-04-24 19:52:26.333523] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.084 [2024-04-24 19:52:26.333551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.084 qpair failed and we were unable to recover it. 00:21:45.084 [2024-04-24 19:52:26.343339] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.084 [2024-04-24 19:52:26.343491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.084 [2024-04-24 19:52:26.343517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.084 [2024-04-24 19:52:26.343532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.084 [2024-04-24 19:52:26.343544] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.084 [2024-04-24 19:52:26.343576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.084 qpair failed and we were unable to recover it. 00:21:45.084 [2024-04-24 19:52:26.353327] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.084 [2024-04-24 19:52:26.353504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.084 [2024-04-24 19:52:26.353530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.084 [2024-04-24 19:52:26.353544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.084 [2024-04-24 19:52:26.353556] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.084 [2024-04-24 19:52:26.353584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.084 qpair failed and we were unable to recover it. 
00:21:45.084 [2024-04-24 19:52:26.363351] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.084 [2024-04-24 19:52:26.363507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.084 [2024-04-24 19:52:26.363533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.084 [2024-04-24 19:52:26.363548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.084 [2024-04-24 19:52:26.363560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.084 [2024-04-24 19:52:26.363588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.084 qpair failed and we were unable to recover it. 00:21:45.084 [2024-04-24 19:52:26.373379] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.084 [2024-04-24 19:52:26.373545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.084 [2024-04-24 19:52:26.373571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.084 [2024-04-24 19:52:26.373585] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.084 [2024-04-24 19:52:26.373599] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.084 [2024-04-24 19:52:26.373626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.084 qpair failed and we were unable to recover it. 00:21:45.084 [2024-04-24 19:52:26.383405] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.084 [2024-04-24 19:52:26.383562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.084 [2024-04-24 19:52:26.383588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.084 [2024-04-24 19:52:26.383603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.084 [2024-04-24 19:52:26.383614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.084 [2024-04-24 19:52:26.383650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.084 qpair failed and we were unable to recover it. 
00:21:45.084 [2024-04-24 19:52:26.393432] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.084 [2024-04-24 19:52:26.393594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.084 [2024-04-24 19:52:26.393625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.084 [2024-04-24 19:52:26.393654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.084 [2024-04-24 19:52:26.393667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.084 [2024-04-24 19:52:26.393695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.084 qpair failed and we were unable to recover it.
00:21:45.084 [2024-04-24 19:52:26.403491] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.084 [2024-04-24 19:52:26.403702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.084 [2024-04-24 19:52:26.403729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.084 [2024-04-24 19:52:26.403744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.084 [2024-04-24 19:52:26.403756] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.084 [2024-04-24 19:52:26.403786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.084 qpair failed and we were unable to recover it.
00:21:45.084 [2024-04-24 19:52:26.413544] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.084 [2024-04-24 19:52:26.413700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.084 [2024-04-24 19:52:26.413726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.084 [2024-04-24 19:52:26.413740] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.084 [2024-04-24 19:52:26.413752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.084 [2024-04-24 19:52:26.413781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.084 qpair failed and we were unable to recover it.
00:21:45.084 [2024-04-24 19:52:26.423545] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.084 [2024-04-24 19:52:26.423709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.085 [2024-04-24 19:52:26.423737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.085 [2024-04-24 19:52:26.423751] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.085 [2024-04-24 19:52:26.423763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.085 [2024-04-24 19:52:26.423791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.085 qpair failed and we were unable to recover it.
00:21:45.085 [2024-04-24 19:52:26.433562] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.085 [2024-04-24 19:52:26.433734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.085 [2024-04-24 19:52:26.433760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.085 [2024-04-24 19:52:26.433775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.085 [2024-04-24 19:52:26.433787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.085 [2024-04-24 19:52:26.433823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.085 qpair failed and we were unable to recover it.
00:21:45.085 [2024-04-24 19:52:26.443590] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.085 [2024-04-24 19:52:26.443751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.085 [2024-04-24 19:52:26.443777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.085 [2024-04-24 19:52:26.443792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.085 [2024-04-24 19:52:26.443804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.085 [2024-04-24 19:52:26.443831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.085 qpair failed and we were unable to recover it.
00:21:45.085 [2024-04-24 19:52:26.453653] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.085 [2024-04-24 19:52:26.453840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.085 [2024-04-24 19:52:26.453867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.085 [2024-04-24 19:52:26.453882] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.085 [2024-04-24 19:52:26.453894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.085 [2024-04-24 19:52:26.453921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.085 qpair failed and we were unable to recover it.
00:21:45.085 [2024-04-24 19:52:26.463671] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.085 [2024-04-24 19:52:26.463827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.085 [2024-04-24 19:52:26.463854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.085 [2024-04-24 19:52:26.463870] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.085 [2024-04-24 19:52:26.463883] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.085 [2024-04-24 19:52:26.463912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.085 qpair failed and we were unable to recover it.
00:21:45.085 [2024-04-24 19:52:26.473681] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.085 [2024-04-24 19:52:26.473843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.085 [2024-04-24 19:52:26.473869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.085 [2024-04-24 19:52:26.473883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.085 [2024-04-24 19:52:26.473896] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.085 [2024-04-24 19:52:26.473923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.085 qpair failed and we were unable to recover it.
00:21:45.085 [2024-04-24 19:52:26.483747] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.085 [2024-04-24 19:52:26.483913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.085 [2024-04-24 19:52:26.483944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.085 [2024-04-24 19:52:26.483960] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.085 [2024-04-24 19:52:26.483972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.085 [2024-04-24 19:52:26.484000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.085 qpair failed and we were unable to recover it.
00:21:45.085 [2024-04-24 19:52:26.493767] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.085 [2024-04-24 19:52:26.493922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.085 [2024-04-24 19:52:26.493948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.085 [2024-04-24 19:52:26.493962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.085 [2024-04-24 19:52:26.493974] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.085 [2024-04-24 19:52:26.494001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.085 qpair failed and we were unable to recover it.
00:21:45.085 [2024-04-24 19:52:26.503790] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.085 [2024-04-24 19:52:26.503939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.085 [2024-04-24 19:52:26.503965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.085 [2024-04-24 19:52:26.503979] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.085 [2024-04-24 19:52:26.503992] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.085 [2024-04-24 19:52:26.504019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.085 qpair failed and we were unable to recover it.
00:21:45.085 [2024-04-24 19:52:26.513831] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.085 [2024-04-24 19:52:26.514016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.085 [2024-04-24 19:52:26.514044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.085 [2024-04-24 19:52:26.514059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.085 [2024-04-24 19:52:26.514071] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.085 [2024-04-24 19:52:26.514100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.085 qpair failed and we were unable to recover it.
00:21:45.085 [2024-04-24 19:52:26.523842] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.085 [2024-04-24 19:52:26.524007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.085 [2024-04-24 19:52:26.524033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.085 [2024-04-24 19:52:26.524049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.085 [2024-04-24 19:52:26.524066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.085 [2024-04-24 19:52:26.524095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.085 qpair failed and we were unable to recover it.
00:21:45.085 [2024-04-24 19:52:26.533897] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.085 [2024-04-24 19:52:26.534052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.085 [2024-04-24 19:52:26.534077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.085 [2024-04-24 19:52:26.534103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.085 [2024-04-24 19:52:26.534116] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.085 [2024-04-24 19:52:26.534143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.085 qpair failed and we were unable to recover it.
00:21:45.085 [2024-04-24 19:52:26.543881] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.085 [2024-04-24 19:52:26.544029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.085 [2024-04-24 19:52:26.544055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.085 [2024-04-24 19:52:26.544070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.085 [2024-04-24 19:52:26.544082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.085 [2024-04-24 19:52:26.544110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.085 qpair failed and we were unable to recover it.
00:21:45.085 [2024-04-24 19:52:26.553910] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.085 [2024-04-24 19:52:26.554067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.085 [2024-04-24 19:52:26.554093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.086 [2024-04-24 19:52:26.554108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.086 [2024-04-24 19:52:26.554120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.086 [2024-04-24 19:52:26.554147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.086 qpair failed and we were unable to recover it.
00:21:45.086 [2024-04-24 19:52:26.564077] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.086 [2024-04-24 19:52:26.564267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.086 [2024-04-24 19:52:26.564293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.086 [2024-04-24 19:52:26.564307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.086 [2024-04-24 19:52:26.564320] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.086 [2024-04-24 19:52:26.564347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.086 qpair failed and we were unable to recover it.
00:21:45.086 [2024-04-24 19:52:26.573981] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.086 [2024-04-24 19:52:26.574137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.086 [2024-04-24 19:52:26.574163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.086 [2024-04-24 19:52:26.574177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.086 [2024-04-24 19:52:26.574189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.086 [2024-04-24 19:52:26.574217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.086 qpair failed and we were unable to recover it.
00:21:45.086 [2024-04-24 19:52:26.584025] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.086 [2024-04-24 19:52:26.584182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.086 [2024-04-24 19:52:26.584210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.086 [2024-04-24 19:52:26.584224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.086 [2024-04-24 19:52:26.584236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.086 [2024-04-24 19:52:26.584264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.086 qpair failed and we were unable to recover it.
00:21:45.086 [2024-04-24 19:52:26.594118] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.086 [2024-04-24 19:52:26.594323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.086 [2024-04-24 19:52:26.594348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.086 [2024-04-24 19:52:26.594363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.086 [2024-04-24 19:52:26.594375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.086 [2024-04-24 19:52:26.594403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.086 qpair failed and we were unable to recover it.
00:21:45.347 [2024-04-24 19:52:26.604110] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.347 [2024-04-24 19:52:26.604282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.347 [2024-04-24 19:52:26.604308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.347 [2024-04-24 19:52:26.604323] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.347 [2024-04-24 19:52:26.604336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.347 [2024-04-24 19:52:26.604363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.347 qpair failed and we were unable to recover it.
00:21:45.347 [2024-04-24 19:52:26.614110] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.347 [2024-04-24 19:52:26.614266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.347 [2024-04-24 19:52:26.614292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.347 [2024-04-24 19:52:26.614307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.347 [2024-04-24 19:52:26.614325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.347 [2024-04-24 19:52:26.614353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.347 qpair failed and we were unable to recover it.
00:21:45.347 [2024-04-24 19:52:26.624115] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.347 [2024-04-24 19:52:26.624270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.347 [2024-04-24 19:52:26.624296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.347 [2024-04-24 19:52:26.624310] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.347 [2024-04-24 19:52:26.624323] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.347 [2024-04-24 19:52:26.624351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.347 qpair failed and we were unable to recover it.
00:21:45.347 [2024-04-24 19:52:26.634212] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.347 [2024-04-24 19:52:26.634366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.347 [2024-04-24 19:52:26.634391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.347 [2024-04-24 19:52:26.634406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.347 [2024-04-24 19:52:26.634418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.347 [2024-04-24 19:52:26.634446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.347 qpair failed and we were unable to recover it.
00:21:45.347 [2024-04-24 19:52:26.644177] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.347 [2024-04-24 19:52:26.644362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.347 [2024-04-24 19:52:26.644388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.347 [2024-04-24 19:52:26.644402] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.347 [2024-04-24 19:52:26.644414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.347 [2024-04-24 19:52:26.644443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.347 qpair failed and we were unable to recover it.
00:21:45.347 [2024-04-24 19:52:26.654157] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.347 [2024-04-24 19:52:26.654307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.347 [2024-04-24 19:52:26.654332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.347 [2024-04-24 19:52:26.654347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.347 [2024-04-24 19:52:26.654359] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.347 [2024-04-24 19:52:26.654388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.347 qpair failed and we were unable to recover it.
00:21:45.347 [2024-04-24 19:52:26.664195] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.347 [2024-04-24 19:52:26.664353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.347 [2024-04-24 19:52:26.664380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.347 [2024-04-24 19:52:26.664395] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.347 [2024-04-24 19:52:26.664407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.347 [2024-04-24 19:52:26.664437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.347 qpair failed and we were unable to recover it.
00:21:45.347 [2024-04-24 19:52:26.674227] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.347 [2024-04-24 19:52:26.674387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.347 [2024-04-24 19:52:26.674412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.347 [2024-04-24 19:52:26.674426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.348 [2024-04-24 19:52:26.674438] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.348 [2024-04-24 19:52:26.674466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.348 qpair failed and we were unable to recover it.
00:21:45.348 [2024-04-24 19:52:26.684247] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.348 [2024-04-24 19:52:26.684405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.348 [2024-04-24 19:52:26.684431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.348 [2024-04-24 19:52:26.684446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.348 [2024-04-24 19:52:26.684458] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.348 [2024-04-24 19:52:26.684486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.348 qpair failed and we were unable to recover it.
00:21:45.348 [2024-04-24 19:52:26.694270] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.348 [2024-04-24 19:52:26.694431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.348 [2024-04-24 19:52:26.694456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.348 [2024-04-24 19:52:26.694471] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.348 [2024-04-24 19:52:26.694483] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.348 [2024-04-24 19:52:26.694511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.348 qpair failed and we were unable to recover it.
00:21:45.348 [2024-04-24 19:52:26.704342] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.348 [2024-04-24 19:52:26.704528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.348 [2024-04-24 19:52:26.704555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.348 [2024-04-24 19:52:26.704578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.348 [2024-04-24 19:52:26.704593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.348 [2024-04-24 19:52:26.704622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.348 qpair failed and we were unable to recover it.
00:21:45.348 [2024-04-24 19:52:26.714379] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.348 [2024-04-24 19:52:26.714557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.348 [2024-04-24 19:52:26.714583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.348 [2024-04-24 19:52:26.714598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.348 [2024-04-24 19:52:26.714610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.348 [2024-04-24 19:52:26.714644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.348 qpair failed and we were unable to recover it.
00:21:45.348 [2024-04-24 19:52:26.724387] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.348 [2024-04-24 19:52:26.724552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.348 [2024-04-24 19:52:26.724577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.348 [2024-04-24 19:52:26.724593] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.348 [2024-04-24 19:52:26.724605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.348 [2024-04-24 19:52:26.724640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.348 qpair failed and we were unable to recover it.
00:21:45.348 [2024-04-24 19:52:26.734431] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.348 [2024-04-24 19:52:26.734593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.348 [2024-04-24 19:52:26.734619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.348 [2024-04-24 19:52:26.734641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.348 [2024-04-24 19:52:26.734655] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.348 [2024-04-24 19:52:26.734683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.348 qpair failed and we were unable to recover it.
00:21:45.348 [2024-04-24 19:52:26.744429] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.348 [2024-04-24 19:52:26.744590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.348 [2024-04-24 19:52:26.744615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.348 [2024-04-24 19:52:26.744636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.348 [2024-04-24 19:52:26.744650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.348 [2024-04-24 19:52:26.744678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.348 qpair failed and we were unable to recover it.
00:21:45.348 [2024-04-24 19:52:26.754461] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.348 [2024-04-24 19:52:26.754616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.348 [2024-04-24 19:52:26.754649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.348 [2024-04-24 19:52:26.754665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.348 [2024-04-24 19:52:26.754677] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.348 [2024-04-24 19:52:26.754706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.348 qpair failed and we were unable to recover it.
00:21:45.348 [2024-04-24 19:52:26.764494] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.348 [2024-04-24 19:52:26.764675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.348 [2024-04-24 19:52:26.764701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.348 [2024-04-24 19:52:26.764716] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.348 [2024-04-24 19:52:26.764728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.348 [2024-04-24 19:52:26.764756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.348 qpair failed and we were unable to recover it.
00:21:45.348 [2024-04-24 19:52:26.774524] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.348 [2024-04-24 19:52:26.774700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.348 [2024-04-24 19:52:26.774725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.348 [2024-04-24 19:52:26.774740] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.348 [2024-04-24 19:52:26.774752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.348 [2024-04-24 19:52:26.774780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.348 qpair failed and we were unable to recover it.
00:21:45.348 [2024-04-24 19:52:26.784545] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.348 [2024-04-24 19:52:26.784707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.348 [2024-04-24 19:52:26.784734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.348 [2024-04-24 19:52:26.784749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.348 [2024-04-24 19:52:26.784761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.348 [2024-04-24 19:52:26.784789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.348 qpair failed and we were unable to recover it.
00:21:45.348 [2024-04-24 19:52:26.794584] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.348 [2024-04-24 19:52:26.794754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.348 [2024-04-24 19:52:26.794780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.348 [2024-04-24 19:52:26.794800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.348 [2024-04-24 19:52:26.794813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.348 [2024-04-24 19:52:26.794840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.348 qpair failed and we were unable to recover it.
00:21:45.349 [2024-04-24 19:52:26.804615] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.349 [2024-04-24 19:52:26.804804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.349 [2024-04-24 19:52:26.804830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.349 [2024-04-24 19:52:26.804845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.349 [2024-04-24 19:52:26.804857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.349 [2024-04-24 19:52:26.804885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.349 qpair failed and we were unable to recover it.
00:21:45.349 [2024-04-24 19:52:26.814769] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.349 [2024-04-24 19:52:26.814928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.349 [2024-04-24 19:52:26.814953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.349 [2024-04-24 19:52:26.814968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.349 [2024-04-24 19:52:26.814980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.349 [2024-04-24 19:52:26.815008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.349 qpair failed and we were unable to recover it.
00:21:45.349 [2024-04-24 19:52:26.824656] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.349 [2024-04-24 19:52:26.824809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.349 [2024-04-24 19:52:26.824834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.349 [2024-04-24 19:52:26.824848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.349 [2024-04-24 19:52:26.824860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.349 [2024-04-24 19:52:26.824888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.349 qpair failed and we were unable to recover it.
00:21:45.349 [2024-04-24 19:52:26.834711] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.349 [2024-04-24 19:52:26.834863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.349 [2024-04-24 19:52:26.834889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.349 [2024-04-24 19:52:26.834903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.349 [2024-04-24 19:52:26.834916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.349 [2024-04-24 19:52:26.834943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.349 qpair failed and we were unable to recover it.
00:21:45.349 [2024-04-24 19:52:26.844721] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.349 [2024-04-24 19:52:26.844874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.349 [2024-04-24 19:52:26.844900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.349 [2024-04-24 19:52:26.844915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.349 [2024-04-24 19:52:26.844927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.349 [2024-04-24 19:52:26.844955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.349 qpair failed and we were unable to recover it.
00:21:45.349 [2024-04-24 19:52:26.854788] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.349 [2024-04-24 19:52:26.854987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.349 [2024-04-24 19:52:26.855013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.349 [2024-04-24 19:52:26.855028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.349 [2024-04-24 19:52:26.855041] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.349 [2024-04-24 19:52:26.855068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.349 qpair failed and we were unable to recover it.
00:21:45.610 [2024-04-24 19:52:26.864798] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.610 [2024-04-24 19:52:26.865028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.610 [2024-04-24 19:52:26.865054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.610 [2024-04-24 19:52:26.865069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.610 [2024-04-24 19:52:26.865081] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.610 [2024-04-24 19:52:26.865109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.610 qpair failed and we were unable to recover it.
00:21:45.610 [2024-04-24 19:52:26.874807] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.610 [2024-04-24 19:52:26.874968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.610 [2024-04-24 19:52:26.874993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.610 [2024-04-24 19:52:26.875008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.610 [2024-04-24 19:52:26.875020] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.610 [2024-04-24 19:52:26.875047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.610 qpair failed and we were unable to recover it.
00:21:45.610 [2024-04-24 19:52:26.884830] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.610 [2024-04-24 19:52:26.884992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.610 [2024-04-24 19:52:26.885018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.610 [2024-04-24 19:52:26.885038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.610 [2024-04-24 19:52:26.885051] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.610 [2024-04-24 19:52:26.885079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.610 qpair failed and we were unable to recover it.
00:21:45.610 [2024-04-24 19:52:26.894892] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.610 [2024-04-24 19:52:26.895046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.610 [2024-04-24 19:52:26.895072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.610 [2024-04-24 19:52:26.895087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.610 [2024-04-24 19:52:26.895099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.610 [2024-04-24 19:52:26.895127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.610 qpair failed and we were unable to recover it.
00:21:45.610 [2024-04-24 19:52:26.904907] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.610 [2024-04-24 19:52:26.905059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.610 [2024-04-24 19:52:26.905084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.610 [2024-04-24 19:52:26.905099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.610 [2024-04-24 19:52:26.905111] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.610 [2024-04-24 19:52:26.905138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.610 qpair failed and we were unable to recover it.
00:21:45.610 [2024-04-24 19:52:26.914945] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.610 [2024-04-24 19:52:26.915110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.611 [2024-04-24 19:52:26.915135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.611 [2024-04-24 19:52:26.915150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.611 [2024-04-24 19:52:26.915162] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.611 [2024-04-24 19:52:26.915188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.611 qpair failed and we were unable to recover it.
00:21:45.611 [2024-04-24 19:52:26.924934] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.611 [2024-04-24 19:52:26.925089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.611 [2024-04-24 19:52:26.925114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.611 [2024-04-24 19:52:26.925128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.611 [2024-04-24 19:52:26.925141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.611 [2024-04-24 19:52:26.925168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.611 qpair failed and we were unable to recover it.
00:21:45.611 [2024-04-24 19:52:26.934958] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.611 [2024-04-24 19:52:26.935115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.611 [2024-04-24 19:52:26.935140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.611 [2024-04-24 19:52:26.935155] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.611 [2024-04-24 19:52:26.935167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.611 [2024-04-24 19:52:26.935195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.611 qpair failed and we were unable to recover it.
00:21:45.611 [2024-04-24 19:52:26.945002] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.611 [2024-04-24 19:52:26.945149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.611 [2024-04-24 19:52:26.945175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.611 [2024-04-24 19:52:26.945189] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.611 [2024-04-24 19:52:26.945202] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.611 [2024-04-24 19:52:26.945232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.611 qpair failed and we were unable to recover it.
00:21:45.611 [2024-04-24 19:52:26.955039] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:45.611 [2024-04-24 19:52:26.955198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:45.611 [2024-04-24 19:52:26.955224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:45.611 [2024-04-24 19:52:26.955238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:45.611 [2024-04-24 19:52:26.955251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:45.611 [2024-04-24 19:52:26.955278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:45.611 qpair failed and we were unable to recover it.
00:21:45.611 [2024-04-24 19:52:26.965076] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.611 [2024-04-24 19:52:26.965283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.611 [2024-04-24 19:52:26.965310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.611 [2024-04-24 19:52:26.965327] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.611 [2024-04-24 19:52:26.965340] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.611 [2024-04-24 19:52:26.965369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.611 qpair failed and we were unable to recover it. 00:21:45.611 [2024-04-24 19:52:26.975066] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.611 [2024-04-24 19:52:26.975217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.611 [2024-04-24 19:52:26.975248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.611 [2024-04-24 19:52:26.975263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.611 [2024-04-24 19:52:26.975275] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.611 [2024-04-24 19:52:26.975303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.611 qpair failed and we were unable to recover it. 00:21:45.611 [2024-04-24 19:52:26.985169] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.611 [2024-04-24 19:52:26.985389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.611 [2024-04-24 19:52:26.985415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.611 [2024-04-24 19:52:26.985430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.611 [2024-04-24 19:52:26.985442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.611 [2024-04-24 19:52:26.985469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.611 qpair failed and we were unable to recover it. 
00:21:45.611 [2024-04-24 19:52:26.995174] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.611 [2024-04-24 19:52:26.995377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.611 [2024-04-24 19:52:26.995403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.611 [2024-04-24 19:52:26.995418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.611 [2024-04-24 19:52:26.995430] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.611 [2024-04-24 19:52:26.995458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.611 qpair failed and we were unable to recover it. 00:21:45.611 [2024-04-24 19:52:27.005158] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.611 [2024-04-24 19:52:27.005315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.611 [2024-04-24 19:52:27.005341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.611 [2024-04-24 19:52:27.005356] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.611 [2024-04-24 19:52:27.005368] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.611 [2024-04-24 19:52:27.005396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.611 qpair failed and we were unable to recover it. 00:21:45.611 [2024-04-24 19:52:27.015186] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.611 [2024-04-24 19:52:27.015349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.611 [2024-04-24 19:52:27.015375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.611 [2024-04-24 19:52:27.015389] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.611 [2024-04-24 19:52:27.015401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.611 [2024-04-24 19:52:27.015429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.611 qpair failed and we were unable to recover it. 
00:21:45.611 [2024-04-24 19:52:27.025220] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.611 [2024-04-24 19:52:27.025368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.611 [2024-04-24 19:52:27.025395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.611 [2024-04-24 19:52:27.025409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.611 [2024-04-24 19:52:27.025421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.611 [2024-04-24 19:52:27.025449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.611 qpair failed and we were unable to recover it. 00:21:45.611 [2024-04-24 19:52:27.035258] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.611 [2024-04-24 19:52:27.035413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.611 [2024-04-24 19:52:27.035439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.611 [2024-04-24 19:52:27.035455] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.611 [2024-04-24 19:52:27.035467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.611 [2024-04-24 19:52:27.035495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.611 qpair failed and we were unable to recover it. 00:21:45.611 [2024-04-24 19:52:27.045277] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.611 [2024-04-24 19:52:27.045435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.611 [2024-04-24 19:52:27.045461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.611 [2024-04-24 19:52:27.045476] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.611 [2024-04-24 19:52:27.045488] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.612 [2024-04-24 19:52:27.045516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.612 qpair failed and we were unable to recover it. 
00:21:45.612 [2024-04-24 19:52:27.055334] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.612 [2024-04-24 19:52:27.055496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.612 [2024-04-24 19:52:27.055522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.612 [2024-04-24 19:52:27.055537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.612 [2024-04-24 19:52:27.055549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.612 [2024-04-24 19:52:27.055577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.612 qpair failed and we were unable to recover it. 00:21:45.612 [2024-04-24 19:52:27.065326] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.612 [2024-04-24 19:52:27.065480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.612 [2024-04-24 19:52:27.065511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.612 [2024-04-24 19:52:27.065527] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.612 [2024-04-24 19:52:27.065539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.612 [2024-04-24 19:52:27.065566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.612 qpair failed and we were unable to recover it. 00:21:45.612 [2024-04-24 19:52:27.075406] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.612 [2024-04-24 19:52:27.075592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.612 [2024-04-24 19:52:27.075618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.612 [2024-04-24 19:52:27.075646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.612 [2024-04-24 19:52:27.075660] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.612 [2024-04-24 19:52:27.075690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.612 qpair failed and we were unable to recover it. 
00:21:45.612 [2024-04-24 19:52:27.085699] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.612 [2024-04-24 19:52:27.085873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.612 [2024-04-24 19:52:27.085899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.612 [2024-04-24 19:52:27.085914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.612 [2024-04-24 19:52:27.085927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.612 [2024-04-24 19:52:27.085955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.612 qpair failed and we were unable to recover it. 00:21:45.612 [2024-04-24 19:52:27.095481] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.612 [2024-04-24 19:52:27.095646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.612 [2024-04-24 19:52:27.095672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.612 [2024-04-24 19:52:27.095687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.612 [2024-04-24 19:52:27.095699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.612 [2024-04-24 19:52:27.095728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.612 qpair failed and we were unable to recover it. 00:21:45.612 [2024-04-24 19:52:27.105491] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.612 [2024-04-24 19:52:27.105655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.612 [2024-04-24 19:52:27.105681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.612 [2024-04-24 19:52:27.105695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.612 [2024-04-24 19:52:27.105708] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.612 [2024-04-24 19:52:27.105741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.612 qpair failed and we were unable to recover it. 
00:21:45.612 [2024-04-24 19:52:27.115563] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.612 [2024-04-24 19:52:27.115731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.612 [2024-04-24 19:52:27.115757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.612 [2024-04-24 19:52:27.115772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.612 [2024-04-24 19:52:27.115784] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.612 [2024-04-24 19:52:27.115811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.612 qpair failed and we were unable to recover it. 00:21:45.872 [2024-04-24 19:52:27.125533] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.872 [2024-04-24 19:52:27.125697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.872 [2024-04-24 19:52:27.125722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.872 [2024-04-24 19:52:27.125737] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.872 [2024-04-24 19:52:27.125750] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.872 [2024-04-24 19:52:27.125778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.872 qpair failed and we were unable to recover it. 00:21:45.872 [2024-04-24 19:52:27.135538] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.872 [2024-04-24 19:52:27.135698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.872 [2024-04-24 19:52:27.135723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.872 [2024-04-24 19:52:27.135738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.872 [2024-04-24 19:52:27.135750] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.872 [2024-04-24 19:52:27.135778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.872 qpair failed and we were unable to recover it. 
00:21:45.872 [2024-04-24 19:52:27.145558] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.872 [2024-04-24 19:52:27.145715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.872 [2024-04-24 19:52:27.145741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.872 [2024-04-24 19:52:27.145756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.872 [2024-04-24 19:52:27.145768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.872 [2024-04-24 19:52:27.145796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.872 qpair failed and we were unable to recover it. 00:21:45.872 [2024-04-24 19:52:27.155605] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.872 [2024-04-24 19:52:27.155775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.872 [2024-04-24 19:52:27.155807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.872 [2024-04-24 19:52:27.155822] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.872 [2024-04-24 19:52:27.155834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.872 [2024-04-24 19:52:27.155862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.872 qpair failed and we were unable to recover it. 00:21:45.872 [2024-04-24 19:52:27.165640] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.872 [2024-04-24 19:52:27.165805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.872 [2024-04-24 19:52:27.165830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.872 [2024-04-24 19:52:27.165845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.872 [2024-04-24 19:52:27.165857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.872 [2024-04-24 19:52:27.165885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.872 qpair failed and we were unable to recover it. 
00:21:45.872 [2024-04-24 19:52:27.175700] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.872 [2024-04-24 19:52:27.175858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.872 [2024-04-24 19:52:27.175883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.872 [2024-04-24 19:52:27.175898] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.872 [2024-04-24 19:52:27.175910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.872 [2024-04-24 19:52:27.175938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.872 qpair failed and we were unable to recover it. 00:21:45.872 [2024-04-24 19:52:27.185709] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.872 [2024-04-24 19:52:27.185909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.872 [2024-04-24 19:52:27.185935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.872 [2024-04-24 19:52:27.185949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.872 [2024-04-24 19:52:27.185961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.872 [2024-04-24 19:52:27.185988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.872 qpair failed and we were unable to recover it. 00:21:45.872 [2024-04-24 19:52:27.195752] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.872 [2024-04-24 19:52:27.195948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.872 [2024-04-24 19:52:27.195973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.872 [2024-04-24 19:52:27.195988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.872 [2024-04-24 19:52:27.196000] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.872 [2024-04-24 19:52:27.196033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.872 qpair failed and we were unable to recover it. 
00:21:45.872 [2024-04-24 19:52:27.205776] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.872 [2024-04-24 19:52:27.205936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.872 [2024-04-24 19:52:27.205962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.872 [2024-04-24 19:52:27.205976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.872 [2024-04-24 19:52:27.205988] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.872 [2024-04-24 19:52:27.206016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.872 qpair failed and we were unable to recover it. 00:21:45.872 [2024-04-24 19:52:27.215779] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.872 [2024-04-24 19:52:27.215979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.872 [2024-04-24 19:52:27.216004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.872 [2024-04-24 19:52:27.216019] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.872 [2024-04-24 19:52:27.216031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.872 [2024-04-24 19:52:27.216059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.872 qpair failed and we were unable to recover it. 00:21:45.873 [2024-04-24 19:52:27.225801] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.873 [2024-04-24 19:52:27.225960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.873 [2024-04-24 19:52:27.225985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.873 [2024-04-24 19:52:27.226000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.873 [2024-04-24 19:52:27.226012] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.873 [2024-04-24 19:52:27.226040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.873 qpair failed and we were unable to recover it. 
00:21:45.873 [2024-04-24 19:52:27.235861] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.873 [2024-04-24 19:52:27.236035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.873 [2024-04-24 19:52:27.236061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.873 [2024-04-24 19:52:27.236076] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.873 [2024-04-24 19:52:27.236089] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.873 [2024-04-24 19:52:27.236117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.873 qpair failed and we were unable to recover it. 00:21:45.873 [2024-04-24 19:52:27.245889] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.873 [2024-04-24 19:52:27.246070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.873 [2024-04-24 19:52:27.246101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.873 [2024-04-24 19:52:27.246117] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.873 [2024-04-24 19:52:27.246130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.873 [2024-04-24 19:52:27.246157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.873 qpair failed and we were unable to recover it. 00:21:45.873 [2024-04-24 19:52:27.255945] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.873 [2024-04-24 19:52:27.256100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.873 [2024-04-24 19:52:27.256126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.873 [2024-04-24 19:52:27.256140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.873 [2024-04-24 19:52:27.256153] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.873 [2024-04-24 19:52:27.256181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.873 qpair failed and we were unable to recover it. 
00:21:45.873 [2024-04-24 19:52:27.265955] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.873 [2024-04-24 19:52:27.266111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.873 [2024-04-24 19:52:27.266136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.873 [2024-04-24 19:52:27.266151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.873 [2024-04-24 19:52:27.266163] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.873 [2024-04-24 19:52:27.266191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.873 qpair failed and we were unable to recover it. 00:21:45.873 [2024-04-24 19:52:27.276000] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.873 [2024-04-24 19:52:27.276165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.873 [2024-04-24 19:52:27.276190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.873 [2024-04-24 19:52:27.276205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.873 [2024-04-24 19:52:27.276217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.873 [2024-04-24 19:52:27.276245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.873 qpair failed and we were unable to recover it. 00:21:45.873 [2024-04-24 19:52:27.285968] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.873 [2024-04-24 19:52:27.286128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.873 [2024-04-24 19:52:27.286155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.873 [2024-04-24 19:52:27.286169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.873 [2024-04-24 19:52:27.286187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.873 [2024-04-24 19:52:27.286215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.873 qpair failed and we were unable to recover it. 
00:21:45.873 [2024-04-24 19:52:27.296006] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.873 [2024-04-24 19:52:27.296171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.873 [2024-04-24 19:52:27.296196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.873 [2024-04-24 19:52:27.296211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.873 [2024-04-24 19:52:27.296223] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.873 [2024-04-24 19:52:27.296252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.873 qpair failed and we were unable to recover it. 00:21:45.873 [2024-04-24 19:52:27.306019] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.873 [2024-04-24 19:52:27.306199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.873 [2024-04-24 19:52:27.306225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.873 [2024-04-24 19:52:27.306240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.873 [2024-04-24 19:52:27.306252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.873 [2024-04-24 19:52:27.306280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.873 qpair failed and we were unable to recover it. 00:21:45.873 [2024-04-24 19:52:27.316069] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.873 [2024-04-24 19:52:27.316238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.873 [2024-04-24 19:52:27.316264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.873 [2024-04-24 19:52:27.316279] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.873 [2024-04-24 19:52:27.316291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.873 [2024-04-24 19:52:27.316319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.873 qpair failed and we were unable to recover it. 
00:21:45.873 [2024-04-24 19:52:27.326138] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.873 [2024-04-24 19:52:27.326297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.873 [2024-04-24 19:52:27.326323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.873 [2024-04-24 19:52:27.326338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.873 [2024-04-24 19:52:27.326350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.873 [2024-04-24 19:52:27.326379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.873 qpair failed and we were unable to recover it. 00:21:45.873 [2024-04-24 19:52:27.336117] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.873 [2024-04-24 19:52:27.336286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.873 [2024-04-24 19:52:27.336311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.873 [2024-04-24 19:52:27.336326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.873 [2024-04-24 19:52:27.336339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.873 [2024-04-24 19:52:27.336366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.873 qpair failed and we were unable to recover it. 00:21:45.873 [2024-04-24 19:52:27.346143] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.873 [2024-04-24 19:52:27.346300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.873 [2024-04-24 19:52:27.346326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.873 [2024-04-24 19:52:27.346341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.873 [2024-04-24 19:52:27.346353] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.873 [2024-04-24 19:52:27.346380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.873 qpair failed and we were unable to recover it. 
00:21:45.873 [2024-04-24 19:52:27.356166] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.873 [2024-04-24 19:52:27.356339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.873 [2024-04-24 19:52:27.356365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.873 [2024-04-24 19:52:27.356379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.874 [2024-04-24 19:52:27.356391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.874 [2024-04-24 19:52:27.356419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.874 qpair failed and we were unable to recover it. 00:21:45.874 [2024-04-24 19:52:27.366184] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.874 [2024-04-24 19:52:27.366345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.874 [2024-04-24 19:52:27.366371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.874 [2024-04-24 19:52:27.366386] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.874 [2024-04-24 19:52:27.366399] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.874 [2024-04-24 19:52:27.366426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.874 qpair failed and we were unable to recover it. 00:21:45.874 [2024-04-24 19:52:27.376209] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:45.874 [2024-04-24 19:52:27.376361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:45.874 [2024-04-24 19:52:27.376386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:45.874 [2024-04-24 19:52:27.376401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:45.874 [2024-04-24 19:52:27.376419] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:45.874 [2024-04-24 19:52:27.376448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:45.874 qpair failed and we were unable to recover it. 
00:21:46.134 [2024-04-24 19:52:27.386268] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.135 [2024-04-24 19:52:27.386428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.135 [2024-04-24 19:52:27.386454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.135 [2024-04-24 19:52:27.386469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.135 [2024-04-24 19:52:27.386481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.135 [2024-04-24 19:52:27.386509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.135 qpair failed and we were unable to recover it. 00:21:46.135 [2024-04-24 19:52:27.396317] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.135 [2024-04-24 19:52:27.396473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.135 [2024-04-24 19:52:27.396499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.135 [2024-04-24 19:52:27.396514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.135 [2024-04-24 19:52:27.396526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.135 [2024-04-24 19:52:27.396553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.135 qpair failed and we were unable to recover it. 00:21:46.135 [2024-04-24 19:52:27.406328] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.135 [2024-04-24 19:52:27.406515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.135 [2024-04-24 19:52:27.406541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.135 [2024-04-24 19:52:27.406555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.135 [2024-04-24 19:52:27.406567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.135 [2024-04-24 19:52:27.406596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.135 qpair failed and we were unable to recover it. 
00:21:46.135 [2024-04-24 19:52:27.416365] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.135 [2024-04-24 19:52:27.416549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.135 [2024-04-24 19:52:27.416575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.135 [2024-04-24 19:52:27.416589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.135 [2024-04-24 19:52:27.416601] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.135 [2024-04-24 19:52:27.416635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.135 qpair failed and we were unable to recover it. 00:21:46.135 [2024-04-24 19:52:27.426403] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.135 [2024-04-24 19:52:27.426612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.135 [2024-04-24 19:52:27.426645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.135 [2024-04-24 19:52:27.426661] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.135 [2024-04-24 19:52:27.426673] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.135 [2024-04-24 19:52:27.426701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.135 qpair failed and we were unable to recover it. 00:21:46.135 [2024-04-24 19:52:27.436445] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.135 [2024-04-24 19:52:27.436661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.135 [2024-04-24 19:52:27.436687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.135 [2024-04-24 19:52:27.436702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.135 [2024-04-24 19:52:27.436714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.135 [2024-04-24 19:52:27.436742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.135 qpair failed and we were unable to recover it. 
00:21:46.135 [2024-04-24 19:52:27.446461] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.135 [2024-04-24 19:52:27.446656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.135 [2024-04-24 19:52:27.446683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.135 [2024-04-24 19:52:27.446699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.135 [2024-04-24 19:52:27.446711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.135 [2024-04-24 19:52:27.446740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.135 qpair failed and we were unable to recover it. 00:21:46.135 [2024-04-24 19:52:27.456461] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.135 [2024-04-24 19:52:27.456657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.135 [2024-04-24 19:52:27.456685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.135 [2024-04-24 19:52:27.456702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.135 [2024-04-24 19:52:27.456715] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.135 [2024-04-24 19:52:27.456746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.135 qpair failed and we were unable to recover it. 00:21:46.135 [2024-04-24 19:52:27.466517] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.135 [2024-04-24 19:52:27.466675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.135 [2024-04-24 19:52:27.466703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.135 [2024-04-24 19:52:27.466717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.135 [2024-04-24 19:52:27.466735] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.135 [2024-04-24 19:52:27.466764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.135 qpair failed and we were unable to recover it. 
00:21:46.135 [2024-04-24 19:52:27.476544] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.135 [2024-04-24 19:52:27.476747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.135 [2024-04-24 19:52:27.476772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.135 [2024-04-24 19:52:27.476787] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.135 [2024-04-24 19:52:27.476800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.135 [2024-04-24 19:52:27.476828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.135 qpair failed and we were unable to recover it. 00:21:46.135 [2024-04-24 19:52:27.486561] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.135 [2024-04-24 19:52:27.486738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.135 [2024-04-24 19:52:27.486764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.135 [2024-04-24 19:52:27.486779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.135 [2024-04-24 19:52:27.486791] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.135 [2024-04-24 19:52:27.486819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.135 qpair failed and we were unable to recover it. 00:21:46.135 [2024-04-24 19:52:27.496574] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.135 [2024-04-24 19:52:27.496747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.135 [2024-04-24 19:52:27.496774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.135 [2024-04-24 19:52:27.496788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.135 [2024-04-24 19:52:27.496800] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.136 [2024-04-24 19:52:27.496828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.136 qpair failed and we were unable to recover it. 
[... the identical seven-line CONNECT failure sequence (ctrlr.c:718 "Unknown controller ID 0x1"; connect rc -5 with sct 1, sc 130; CQ transport error -6 on qpair id 3; "qpair failed and we were unable to recover it") repeats for 63 further retries between 19:52:27.506 and 19:52:28.128, differing only in timestamps ...]
00:21:46.663 [2024-04-24 19:52:28.138425] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.663 [2024-04-24 19:52:28.138598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.663 [2024-04-24 19:52:28.138623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.663 [2024-04-24 19:52:28.138645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.663 [2024-04-24 19:52:28.138664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.663 [2024-04-24 19:52:28.138692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.663 qpair failed and we were unable to recover it. 00:21:46.663 [2024-04-24 19:52:28.148477] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.663 [2024-04-24 19:52:28.148709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.663 [2024-04-24 19:52:28.148735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.663 [2024-04-24 19:52:28.148749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.663 [2024-04-24 19:52:28.148761] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.663 [2024-04-24 19:52:28.148789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.663 qpair failed and we were unable to recover it. 00:21:46.663 [2024-04-24 19:52:28.158514] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.663 [2024-04-24 19:52:28.158683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.663 [2024-04-24 19:52:28.158709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.663 [2024-04-24 19:52:28.158724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.663 [2024-04-24 19:52:28.158736] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.663 [2024-04-24 19:52:28.158764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.663 qpair failed and we were unable to recover it. 
00:21:46.663 [2024-04-24 19:52:28.168519] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.663 [2024-04-24 19:52:28.168673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.663 [2024-04-24 19:52:28.168700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.663 [2024-04-24 19:52:28.168715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.663 [2024-04-24 19:52:28.168727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.663 [2024-04-24 19:52:28.168755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.663 qpair failed and we were unable to recover it. 00:21:46.923 [2024-04-24 19:52:28.178547] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.923 [2024-04-24 19:52:28.178712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.923 [2024-04-24 19:52:28.178738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.923 [2024-04-24 19:52:28.178753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.923 [2024-04-24 19:52:28.178765] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.923 [2024-04-24 19:52:28.178793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.923 qpair failed and we were unable to recover it. 00:21:46.923 [2024-04-24 19:52:28.188540] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.923 [2024-04-24 19:52:28.188713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.923 [2024-04-24 19:52:28.188739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.923 [2024-04-24 19:52:28.188753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.923 [2024-04-24 19:52:28.188765] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.923 [2024-04-24 19:52:28.188793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.923 qpair failed and we were unable to recover it. 
00:21:46.923 [2024-04-24 19:52:28.198591] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.924 [2024-04-24 19:52:28.198757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.924 [2024-04-24 19:52:28.198783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.924 [2024-04-24 19:52:28.198797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.924 [2024-04-24 19:52:28.198809] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.924 [2024-04-24 19:52:28.198837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.924 qpair failed and we were unable to recover it. 00:21:46.924 [2024-04-24 19:52:28.208602] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.924 [2024-04-24 19:52:28.208804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.924 [2024-04-24 19:52:28.208830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.924 [2024-04-24 19:52:28.208845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.924 [2024-04-24 19:52:28.208857] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.924 [2024-04-24 19:52:28.208884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.924 qpair failed and we were unable to recover it. 00:21:46.924 [2024-04-24 19:52:28.218665] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.924 [2024-04-24 19:52:28.218872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.924 [2024-04-24 19:52:28.218897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.924 [2024-04-24 19:52:28.218912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.924 [2024-04-24 19:52:28.218924] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.924 [2024-04-24 19:52:28.218952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.924 qpair failed and we were unable to recover it. 
00:21:46.924 [2024-04-24 19:52:28.228691] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.924 [2024-04-24 19:52:28.228840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.924 [2024-04-24 19:52:28.228866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.924 [2024-04-24 19:52:28.228880] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.924 [2024-04-24 19:52:28.228898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.924 [2024-04-24 19:52:28.228926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.924 qpair failed and we were unable to recover it. 00:21:46.924 [2024-04-24 19:52:28.238686] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.924 [2024-04-24 19:52:28.238856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.924 [2024-04-24 19:52:28.238881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.924 [2024-04-24 19:52:28.238896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.924 [2024-04-24 19:52:28.238908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.924 [2024-04-24 19:52:28.238935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.924 qpair failed and we were unable to recover it. 00:21:46.924 [2024-04-24 19:52:28.248735] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.924 [2024-04-24 19:52:28.248890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.924 [2024-04-24 19:52:28.248916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.924 [2024-04-24 19:52:28.248930] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.924 [2024-04-24 19:52:28.248943] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.924 [2024-04-24 19:52:28.248970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.924 qpair failed and we were unable to recover it. 
00:21:46.924 [2024-04-24 19:52:28.258754] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.924 [2024-04-24 19:52:28.258908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.924 [2024-04-24 19:52:28.258933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.924 [2024-04-24 19:52:28.258948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.924 [2024-04-24 19:52:28.258960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.924 [2024-04-24 19:52:28.258987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.924 qpair failed and we were unable to recover it. 00:21:46.924 [2024-04-24 19:52:28.268766] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.924 [2024-04-24 19:52:28.268937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.924 [2024-04-24 19:52:28.268963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.924 [2024-04-24 19:52:28.268978] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.924 [2024-04-24 19:52:28.268990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.924 [2024-04-24 19:52:28.269017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.924 qpair failed and we were unable to recover it. 00:21:46.924 [2024-04-24 19:52:28.278799] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.924 [2024-04-24 19:52:28.278954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.924 [2024-04-24 19:52:28.278980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.924 [2024-04-24 19:52:28.278994] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.924 [2024-04-24 19:52:28.279006] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.924 [2024-04-24 19:52:28.279033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.924 qpair failed and we were unable to recover it. 
00:21:46.924 [2024-04-24 19:52:28.288835] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.924 [2024-04-24 19:52:28.288995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.924 [2024-04-24 19:52:28.289022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.924 [2024-04-24 19:52:28.289038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.924 [2024-04-24 19:52:28.289050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.924 [2024-04-24 19:52:28.289078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.924 qpair failed and we were unable to recover it. 00:21:46.924 [2024-04-24 19:52:28.298875] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.924 [2024-04-24 19:52:28.299041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.924 [2024-04-24 19:52:28.299067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.924 [2024-04-24 19:52:28.299081] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.924 [2024-04-24 19:52:28.299093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.924 [2024-04-24 19:52:28.299121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.924 qpair failed and we were unable to recover it. 00:21:46.924 [2024-04-24 19:52:28.308897] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.924 [2024-04-24 19:52:28.309060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.924 [2024-04-24 19:52:28.309086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.924 [2024-04-24 19:52:28.309101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.924 [2024-04-24 19:52:28.309112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.924 [2024-04-24 19:52:28.309142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.924 qpair failed and we were unable to recover it. 
00:21:46.924 [2024-04-24 19:52:28.318944] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.924 [2024-04-24 19:52:28.319103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.924 [2024-04-24 19:52:28.319129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.924 [2024-04-24 19:52:28.319149] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.924 [2024-04-24 19:52:28.319162] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.924 [2024-04-24 19:52:28.319189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.924 qpair failed and we were unable to recover it. 00:21:46.924 [2024-04-24 19:52:28.328965] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.924 [2024-04-24 19:52:28.329140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.924 [2024-04-24 19:52:28.329165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.924 [2024-04-24 19:52:28.329180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.924 [2024-04-24 19:52:28.329192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.925 [2024-04-24 19:52:28.329220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.925 qpair failed and we were unable to recover it. 00:21:46.925 [2024-04-24 19:52:28.338964] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.925 [2024-04-24 19:52:28.339149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.925 [2024-04-24 19:52:28.339174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.925 [2024-04-24 19:52:28.339189] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.925 [2024-04-24 19:52:28.339201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.925 [2024-04-24 19:52:28.339228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.925 qpair failed and we were unable to recover it. 
00:21:46.925 [2024-04-24 19:52:28.349091] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.925 [2024-04-24 19:52:28.349247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.925 [2024-04-24 19:52:28.349273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.925 [2024-04-24 19:52:28.349287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.925 [2024-04-24 19:52:28.349300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.925 [2024-04-24 19:52:28.349327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.925 qpair failed and we were unable to recover it. 00:21:46.925 [2024-04-24 19:52:28.359035] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.925 [2024-04-24 19:52:28.359198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.925 [2024-04-24 19:52:28.359224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.925 [2024-04-24 19:52:28.359238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.925 [2024-04-24 19:52:28.359250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.925 [2024-04-24 19:52:28.359278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.925 qpair failed and we were unable to recover it. 00:21:46.925 [2024-04-24 19:52:28.369108] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.925 [2024-04-24 19:52:28.369310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.925 [2024-04-24 19:52:28.369335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.925 [2024-04-24 19:52:28.369350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.925 [2024-04-24 19:52:28.369362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.925 [2024-04-24 19:52:28.369390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.925 qpair failed and we were unable to recover it. 
00:21:46.925 [2024-04-24 19:52:28.379081] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.925 [2024-04-24 19:52:28.379232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.925 [2024-04-24 19:52:28.379258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.925 [2024-04-24 19:52:28.379273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.925 [2024-04-24 19:52:28.379286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.925 [2024-04-24 19:52:28.379313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.925 qpair failed and we were unable to recover it. 00:21:46.925 [2024-04-24 19:52:28.389104] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.925 [2024-04-24 19:52:28.389257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.925 [2024-04-24 19:52:28.389283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.925 [2024-04-24 19:52:28.389298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.925 [2024-04-24 19:52:28.389310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.925 [2024-04-24 19:52:28.389338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.925 qpair failed and we were unable to recover it. 00:21:46.925 [2024-04-24 19:52:28.399221] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.925 [2024-04-24 19:52:28.399380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.925 [2024-04-24 19:52:28.399406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.925 [2024-04-24 19:52:28.399421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.925 [2024-04-24 19:52:28.399433] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.925 [2024-04-24 19:52:28.399461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.925 qpair failed and we were unable to recover it. 
00:21:46.925 [2024-04-24 19:52:28.409203] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.925 [2024-04-24 19:52:28.409404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.925 [2024-04-24 19:52:28.409430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.925 [2024-04-24 19:52:28.409450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.925 [2024-04-24 19:52:28.409463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.925 [2024-04-24 19:52:28.409491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.925 qpair failed and we were unable to recover it. 00:21:46.925 [2024-04-24 19:52:28.419237] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.925 [2024-04-24 19:52:28.419413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.925 [2024-04-24 19:52:28.419438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.925 [2024-04-24 19:52:28.419453] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.925 [2024-04-24 19:52:28.419464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.925 [2024-04-24 19:52:28.419492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.925 qpair failed and we were unable to recover it. 00:21:46.925 [2024-04-24 19:52:28.429238] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:46.925 [2024-04-24 19:52:28.429397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:46.925 [2024-04-24 19:52:28.429423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:46.925 [2024-04-24 19:52:28.429438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:46.925 [2024-04-24 19:52:28.429450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:46.925 [2024-04-24 19:52:28.429478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:46.925 qpair failed and we were unable to recover it. 
00:21:47.186 [2024-04-24 19:52:28.439293] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.186 [2024-04-24 19:52:28.439499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.186 [2024-04-24 19:52:28.439525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.186 [2024-04-24 19:52:28.439540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.186 [2024-04-24 19:52:28.439552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.186 [2024-04-24 19:52:28.439582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.186 qpair failed and we were unable to recover it. 00:21:47.186 [2024-04-24 19:52:28.449313] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.186 [2024-04-24 19:52:28.449481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.186 [2024-04-24 19:52:28.449507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.186 [2024-04-24 19:52:28.449523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.186 [2024-04-24 19:52:28.449535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.186 [2024-04-24 19:52:28.449564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.186 qpair failed and we were unable to recover it. 00:21:47.186 [2024-04-24 19:52:28.459314] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.186 [2024-04-24 19:52:28.459515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.186 [2024-04-24 19:52:28.459541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.186 [2024-04-24 19:52:28.459556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.186 [2024-04-24 19:52:28.459569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.186 [2024-04-24 19:52:28.459597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.186 qpair failed and we were unable to recover it. 
00:21:47.186 [2024-04-24 19:52:28.469321] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.186 [2024-04-24 19:52:28.469469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.186 [2024-04-24 19:52:28.469495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.186 [2024-04-24 19:52:28.469510] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.186 [2024-04-24 19:52:28.469522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.186 [2024-04-24 19:52:28.469550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.186 qpair failed and we were unable to recover it. 00:21:47.186 [2024-04-24 19:52:28.479412] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.186 [2024-04-24 19:52:28.479575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.186 [2024-04-24 19:52:28.479600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.186 [2024-04-24 19:52:28.479615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.186 [2024-04-24 19:52:28.479635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.186 [2024-04-24 19:52:28.479666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.186 qpair failed and we were unable to recover it. 00:21:47.186 [2024-04-24 19:52:28.489424] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.186 [2024-04-24 19:52:28.489584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.186 [2024-04-24 19:52:28.489610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.186 [2024-04-24 19:52:28.489624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.186 [2024-04-24 19:52:28.489646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.186 [2024-04-24 19:52:28.489675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.186 qpair failed and we were unable to recover it. 
00:21:47.186 [2024-04-24 19:52:28.499398] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.186 [2024-04-24 19:52:28.499568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.186 [2024-04-24 19:52:28.499593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.186 [2024-04-24 19:52:28.499613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.186 [2024-04-24 19:52:28.499626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.186 [2024-04-24 19:52:28.499665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.186 qpair failed and we were unable to recover it. 00:21:47.186 [2024-04-24 19:52:28.509426] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.186 [2024-04-24 19:52:28.509576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.186 [2024-04-24 19:52:28.509602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.186 [2024-04-24 19:52:28.509616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.186 [2024-04-24 19:52:28.509635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.186 [2024-04-24 19:52:28.509665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.186 qpair failed and we were unable to recover it. 00:21:47.186 [2024-04-24 19:52:28.519468] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.186 [2024-04-24 19:52:28.519633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.186 [2024-04-24 19:52:28.519659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.186 [2024-04-24 19:52:28.519674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.186 [2024-04-24 19:52:28.519686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.186 [2024-04-24 19:52:28.519713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.186 qpair failed and we were unable to recover it. 
00:21:47.186 [2024-04-24 19:52:28.529496] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.186 [2024-04-24 19:52:28.529662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.186 [2024-04-24 19:52:28.529688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.186 [2024-04-24 19:52:28.529702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.186 [2024-04-24 19:52:28.529715] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.186 [2024-04-24 19:52:28.529743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.186 qpair failed and we were unable to recover it. 00:21:47.186 [2024-04-24 19:52:28.539544] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.186 [2024-04-24 19:52:28.539717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.186 [2024-04-24 19:52:28.539743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.186 [2024-04-24 19:52:28.539757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.186 [2024-04-24 19:52:28.539769] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.186 [2024-04-24 19:52:28.539796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.186 qpair failed and we were unable to recover it. 00:21:47.186 [2024-04-24 19:52:28.549540] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.186 [2024-04-24 19:52:28.549713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.186 [2024-04-24 19:52:28.549739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.186 [2024-04-24 19:52:28.549754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.186 [2024-04-24 19:52:28.549766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.187 [2024-04-24 19:52:28.549793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.187 qpair failed and we were unable to recover it. 
00:21:47.187 [2024-04-24 19:52:28.559638] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.187 [2024-04-24 19:52:28.559807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.187 [2024-04-24 19:52:28.559832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.187 [2024-04-24 19:52:28.559847] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.187 [2024-04-24 19:52:28.559860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.187 [2024-04-24 19:52:28.559887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.187 qpair failed and we were unable to recover it. 00:21:47.187 [2024-04-24 19:52:28.569618] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.187 [2024-04-24 19:52:28.569806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.187 [2024-04-24 19:52:28.569831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.187 [2024-04-24 19:52:28.569846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.187 [2024-04-24 19:52:28.569858] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.187 [2024-04-24 19:52:28.569886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.187 qpair failed and we were unable to recover it. 00:21:47.187 [2024-04-24 19:52:28.579683] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.187 [2024-04-24 19:52:28.579852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.187 [2024-04-24 19:52:28.579877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.187 [2024-04-24 19:52:28.579892] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.187 [2024-04-24 19:52:28.579904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.187 [2024-04-24 19:52:28.579931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.187 qpair failed and we were unable to recover it. 
00:21:47.187 [2024-04-24 19:52:28.589696] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.187 [2024-04-24 19:52:28.589851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.187 [2024-04-24 19:52:28.589881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.187 [2024-04-24 19:52:28.589897] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.187 [2024-04-24 19:52:28.589909] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.187 [2024-04-24 19:52:28.589937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.187 qpair failed and we were unable to recover it. 00:21:47.187 [2024-04-24 19:52:28.599748] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.187 [2024-04-24 19:52:28.599914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.187 [2024-04-24 19:52:28.599940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.187 [2024-04-24 19:52:28.599954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.187 [2024-04-24 19:52:28.599967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.187 [2024-04-24 19:52:28.599994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.187 qpair failed and we were unable to recover it. 00:21:47.187 [2024-04-24 19:52:28.609798] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.187 [2024-04-24 19:52:28.609965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.187 [2024-04-24 19:52:28.609992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.187 [2024-04-24 19:52:28.610010] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.187 [2024-04-24 19:52:28.610022] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.187 [2024-04-24 19:52:28.610051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.187 qpair failed and we were unable to recover it. 
00:21:47.187 [2024-04-24 19:52:28.619733] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.187 [2024-04-24 19:52:28.619887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.187 [2024-04-24 19:52:28.619912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.187 [2024-04-24 19:52:28.619927] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.187 [2024-04-24 19:52:28.619940] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.187 [2024-04-24 19:52:28.619968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.187 qpair failed and we were unable to recover it. 00:21:47.187 [2024-04-24 19:52:28.629759] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.187 [2024-04-24 19:52:28.629915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.187 [2024-04-24 19:52:28.629942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.187 [2024-04-24 19:52:28.629957] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.187 [2024-04-24 19:52:28.629969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.187 [2024-04-24 19:52:28.630002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.187 qpair failed and we were unable to recover it. 00:21:47.187 [2024-04-24 19:52:28.639822] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.187 [2024-04-24 19:52:28.639984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.187 [2024-04-24 19:52:28.640010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.187 [2024-04-24 19:52:28.640026] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.187 [2024-04-24 19:52:28.640038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.187 [2024-04-24 19:52:28.640067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.187 qpair failed and we were unable to recover it. 
00:21:47.187 [2024-04-24 19:52:28.649849] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.187 [2024-04-24 19:52:28.650010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.187 [2024-04-24 19:52:28.650036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.187 [2024-04-24 19:52:28.650052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.187 [2024-04-24 19:52:28.650064] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.187 [2024-04-24 19:52:28.650092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.187 qpair failed and we were unable to recover it. 00:21:47.187 [2024-04-24 19:52:28.659865] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.187 [2024-04-24 19:52:28.660021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.187 [2024-04-24 19:52:28.660047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.187 [2024-04-24 19:52:28.660062] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.187 [2024-04-24 19:52:28.660076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.187 [2024-04-24 19:52:28.660104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.187 qpair failed and we were unable to recover it. 00:21:47.187 [2024-04-24 19:52:28.669903] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.187 [2024-04-24 19:52:28.670061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.187 [2024-04-24 19:52:28.670086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.187 [2024-04-24 19:52:28.670101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.187 [2024-04-24 19:52:28.670113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.187 [2024-04-24 19:52:28.670141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.187 qpair failed and we were unable to recover it. 
00:21:47.187 [2024-04-24 19:52:28.679930] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.187 [2024-04-24 19:52:28.680135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.187 [2024-04-24 19:52:28.680166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.187 [2024-04-24 19:52:28.680182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.187 [2024-04-24 19:52:28.680194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.187 [2024-04-24 19:52:28.680222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.187 qpair failed and we were unable to recover it.
00:21:47.188 [2024-04-24 19:52:28.689943] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.188 [2024-04-24 19:52:28.690146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.188 [2024-04-24 19:52:28.690173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.188 [2024-04-24 19:52:28.690188] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.188 [2024-04-24 19:52:28.690201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.188 [2024-04-24 19:52:28.690231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.188 qpair failed and we were unable to recover it.
00:21:47.448 [2024-04-24 19:52:28.700005] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.448 [2024-04-24 19:52:28.700162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.448 [2024-04-24 19:52:28.700186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.448 [2024-04-24 19:52:28.700200] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.448 [2024-04-24 19:52:28.700213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.448 [2024-04-24 19:52:28.700240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.448 qpair failed and we were unable to recover it.
00:21:47.448 [2024-04-24 19:52:28.709993] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.448 [2024-04-24 19:52:28.710157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.448 [2024-04-24 19:52:28.710183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.448 [2024-04-24 19:52:28.710198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.448 [2024-04-24 19:52:28.710210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.448 [2024-04-24 19:52:28.710238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.448 qpair failed and we were unable to recover it.
00:21:47.448 [2024-04-24 19:52:28.720034] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.448 [2024-04-24 19:52:28.720196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.448 [2024-04-24 19:52:28.720220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.448 [2024-04-24 19:52:28.720235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.448 [2024-04-24 19:52:28.720248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.448 [2024-04-24 19:52:28.720283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.448 qpair failed and we were unable to recover it.
00:21:47.448 [2024-04-24 19:52:28.730058] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.448 [2024-04-24 19:52:28.730212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.448 [2024-04-24 19:52:28.730237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.448 [2024-04-24 19:52:28.730252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.448 [2024-04-24 19:52:28.730265] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.448 [2024-04-24 19:52:28.730295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.448 qpair failed and we were unable to recover it.
00:21:47.448 [2024-04-24 19:52:28.740114] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.448 [2024-04-24 19:52:28.740280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.448 [2024-04-24 19:52:28.740305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.448 [2024-04-24 19:52:28.740319] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.448 [2024-04-24 19:52:28.740332] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.448 [2024-04-24 19:52:28.740361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.448 qpair failed and we were unable to recover it.
00:21:47.448 [2024-04-24 19:52:28.750090] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.448 [2024-04-24 19:52:28.750263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.448 [2024-04-24 19:52:28.750287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.448 [2024-04-24 19:52:28.750302] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.448 [2024-04-24 19:52:28.750315] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.448 [2024-04-24 19:52:28.750343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.448 qpair failed and we were unable to recover it.
00:21:47.448 [2024-04-24 19:52:28.760186] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.448 [2024-04-24 19:52:28.760362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.448 [2024-04-24 19:52:28.760386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.448 [2024-04-24 19:52:28.760401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.448 [2024-04-24 19:52:28.760414] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.448 [2024-04-24 19:52:28.760442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.448 qpair failed and we were unable to recover it.
00:21:47.448 [2024-04-24 19:52:28.770204] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.448 [2024-04-24 19:52:28.770390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.448 [2024-04-24 19:52:28.770435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.448 [2024-04-24 19:52:28.770452] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.448 [2024-04-24 19:52:28.770465] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.448 [2024-04-24 19:52:28.770507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.448 qpair failed and we were unable to recover it.
00:21:47.448 [2024-04-24 19:52:28.780191] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.448 [2024-04-24 19:52:28.780347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.448 [2024-04-24 19:52:28.780373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.448 [2024-04-24 19:52:28.780388] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.448 [2024-04-24 19:52:28.780400] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.448 [2024-04-24 19:52:28.780430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.448 qpair failed and we were unable to recover it.
00:21:47.448 [2024-04-24 19:52:28.790226] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.449 [2024-04-24 19:52:28.790405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.449 [2024-04-24 19:52:28.790430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.449 [2024-04-24 19:52:28.790444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.449 [2024-04-24 19:52:28.790457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.449 [2024-04-24 19:52:28.790486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.449 qpair failed and we were unable to recover it.
00:21:47.449 [2024-04-24 19:52:28.800284] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.449 [2024-04-24 19:52:28.800488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.449 [2024-04-24 19:52:28.800513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.449 [2024-04-24 19:52:28.800533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.449 [2024-04-24 19:52:28.800547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.449 [2024-04-24 19:52:28.800577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.449 qpair failed and we were unable to recover it.
00:21:47.449 [2024-04-24 19:52:28.810316] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.449 [2024-04-24 19:52:28.810498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.449 [2024-04-24 19:52:28.810539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.449 [2024-04-24 19:52:28.810554] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.449 [2024-04-24 19:52:28.810567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.449 [2024-04-24 19:52:28.810615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.449 qpair failed and we were unable to recover it.
00:21:47.449 [2024-04-24 19:52:28.820308] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.449 [2024-04-24 19:52:28.820466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.449 [2024-04-24 19:52:28.820492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.449 [2024-04-24 19:52:28.820508] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.449 [2024-04-24 19:52:28.820521] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.449 [2024-04-24 19:52:28.820549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.449 qpair failed and we were unable to recover it.
00:21:47.449 [2024-04-24 19:52:28.830370] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.449 [2024-04-24 19:52:28.830549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.449 [2024-04-24 19:52:28.830575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.449 [2024-04-24 19:52:28.830590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.449 [2024-04-24 19:52:28.830603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.449 [2024-04-24 19:52:28.830638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.449 qpair failed and we were unable to recover it.
00:21:47.449 [2024-04-24 19:52:28.840398] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.449 [2024-04-24 19:52:28.840569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.449 [2024-04-24 19:52:28.840595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.449 [2024-04-24 19:52:28.840610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.449 [2024-04-24 19:52:28.840623] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.449 [2024-04-24 19:52:28.840661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.449 qpair failed and we were unable to recover it.
00:21:47.449 [2024-04-24 19:52:28.850448] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.449 [2024-04-24 19:52:28.850610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.449 [2024-04-24 19:52:28.850641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.449 [2024-04-24 19:52:28.850658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.449 [2024-04-24 19:52:28.850671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.449 [2024-04-24 19:52:28.850700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.449 qpair failed and we were unable to recover it.
00:21:47.449 [2024-04-24 19:52:28.860530] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.449 [2024-04-24 19:52:28.860699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.449 [2024-04-24 19:52:28.860730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.449 [2024-04-24 19:52:28.860746] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.449 [2024-04-24 19:52:28.860759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.449 [2024-04-24 19:52:28.860787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.449 qpair failed and we were unable to recover it.
00:21:47.449 [2024-04-24 19:52:28.870476] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.449 [2024-04-24 19:52:28.870645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.449 [2024-04-24 19:52:28.870671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.449 [2024-04-24 19:52:28.870686] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.449 [2024-04-24 19:52:28.870699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.449 [2024-04-24 19:52:28.870727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.449 qpair failed and we were unable to recover it.
00:21:47.449 [2024-04-24 19:52:28.880539] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.449 [2024-04-24 19:52:28.880733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.449 [2024-04-24 19:52:28.880759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.449 [2024-04-24 19:52:28.880774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.449 [2024-04-24 19:52:28.880787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.449 [2024-04-24 19:52:28.880815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.449 qpair failed and we were unable to recover it.
00:21:47.449 [2024-04-24 19:52:28.890531] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.449 [2024-04-24 19:52:28.890702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.449 [2024-04-24 19:52:28.890738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.449 [2024-04-24 19:52:28.890754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.449 [2024-04-24 19:52:28.890767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.449 [2024-04-24 19:52:28.890795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.449 qpair failed and we were unable to recover it.
00:21:47.449 [2024-04-24 19:52:28.900544] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.449 [2024-04-24 19:52:28.900706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.449 [2024-04-24 19:52:28.900733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.449 [2024-04-24 19:52:28.900748] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.449 [2024-04-24 19:52:28.900766] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.449 [2024-04-24 19:52:28.900798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.449 qpair failed and we were unable to recover it.
00:21:47.449 [2024-04-24 19:52:28.910584] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.449 [2024-04-24 19:52:28.910748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.449 [2024-04-24 19:52:28.910774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.449 [2024-04-24 19:52:28.910790] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.449 [2024-04-24 19:52:28.910802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.449 [2024-04-24 19:52:28.910831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.449 qpair failed and we were unable to recover it.
00:21:47.449 [2024-04-24 19:52:28.920618] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.449 [2024-04-24 19:52:28.920787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.449 [2024-04-24 19:52:28.920813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.450 [2024-04-24 19:52:28.920828] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.450 [2024-04-24 19:52:28.920840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.450 [2024-04-24 19:52:28.920868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.450 qpair failed and we were unable to recover it.
00:21:47.450 [2024-04-24 19:52:28.930643] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.450 [2024-04-24 19:52:28.930831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.450 [2024-04-24 19:52:28.930859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.450 [2024-04-24 19:52:28.930877] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.450 [2024-04-24 19:52:28.930890] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.450 [2024-04-24 19:52:28.930919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.450 qpair failed and we were unable to recover it.
00:21:47.450 [2024-04-24 19:52:28.940667] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.450 [2024-04-24 19:52:28.940823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.450 [2024-04-24 19:52:28.940849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.450 [2024-04-24 19:52:28.940865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.450 [2024-04-24 19:52:28.940877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.450 [2024-04-24 19:52:28.940906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.450 qpair failed and we were unable to recover it.
00:21:47.450 [2024-04-24 19:52:28.950686] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.450 [2024-04-24 19:52:28.950847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.450 [2024-04-24 19:52:28.950873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.450 [2024-04-24 19:52:28.950889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.450 [2024-04-24 19:52:28.950902] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.450 [2024-04-24 19:52:28.950930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.450 qpair failed and we were unable to recover it.
00:21:47.450 [2024-04-24 19:52:28.960879] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.450 [2024-04-24 19:52:28.961055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.450 [2024-04-24 19:52:28.961081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.709 [2024-04-24 19:52:28.961097] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.709 [2024-04-24 19:52:28.961110] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.709 [2024-04-24 19:52:28.961138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.709 qpair failed and we were unable to recover it.
00:21:47.709 [2024-04-24 19:52:28.970753] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.709 [2024-04-24 19:52:28.970906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.709 [2024-04-24 19:52:28.970932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.709 [2024-04-24 19:52:28.970948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.709 [2024-04-24 19:52:28.970961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.709 [2024-04-24 19:52:28.970989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.709 qpair failed and we were unable to recover it.
00:21:47.709 [2024-04-24 19:52:28.980796] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.709 [2024-04-24 19:52:28.980958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.709 [2024-04-24 19:52:28.980983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.709 [2024-04-24 19:52:28.980999] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.709 [2024-04-24 19:52:28.981011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.709 [2024-04-24 19:52:28.981040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.709 qpair failed and we were unable to recover it.
00:21:47.709 [2024-04-24 19:52:28.990853] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.709 [2024-04-24 19:52:28.991029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.710 [2024-04-24 19:52:28.991055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.710 [2024-04-24 19:52:28.991070] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.710 [2024-04-24 19:52:28.991088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.710 [2024-04-24 19:52:28.991117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.710 qpair failed and we were unable to recover it.
00:21:47.710 [2024-04-24 19:52:29.000852] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.710 [2024-04-24 19:52:29.001012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.710 [2024-04-24 19:52:29.001037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.710 [2024-04-24 19:52:29.001052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.710 [2024-04-24 19:52:29.001065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.710 [2024-04-24 19:52:29.001093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.710 qpair failed and we were unable to recover it.
00:21:47.710 [2024-04-24 19:52:29.010862] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.710 [2024-04-24 19:52:29.011022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.710 [2024-04-24 19:52:29.011049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.710 [2024-04-24 19:52:29.011064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.710 [2024-04-24 19:52:29.011077] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.710 [2024-04-24 19:52:29.011105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.710 qpair failed and we were unable to recover it.
00:21:47.710 [2024-04-24 19:52:29.020891] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.710 [2024-04-24 19:52:29.021050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.710 [2024-04-24 19:52:29.021076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.710 [2024-04-24 19:52:29.021091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.710 [2024-04-24 19:52:29.021104] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.710 [2024-04-24 19:52:29.021133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.710 qpair failed and we were unable to recover it.
00:21:47.710 [2024-04-24 19:52:29.030922] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.710 [2024-04-24 19:52:29.031071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.710 [2024-04-24 19:52:29.031098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.710 [2024-04-24 19:52:29.031113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.710 [2024-04-24 19:52:29.031125] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.710 [2024-04-24 19:52:29.031153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.710 qpair failed and we were unable to recover it.
00:21:47.710 [2024-04-24 19:52:29.040972] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.710 [2024-04-24 19:52:29.041138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.710 [2024-04-24 19:52:29.041163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.710 [2024-04-24 19:52:29.041178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.710 [2024-04-24 19:52:29.041192] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.710 [2024-04-24 19:52:29.041220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.710 qpair failed and we were unable to recover it.
00:21:47.710 [2024-04-24 19:52:29.050983] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.710 [2024-04-24 19:52:29.051191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.710 [2024-04-24 19:52:29.051217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.710 [2024-04-24 19:52:29.051233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.710 [2024-04-24 19:52:29.051245] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.710 [2024-04-24 19:52:29.051275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.710 qpair failed and we were unable to recover it.
00:21:47.710 [2024-04-24 19:52:29.061037] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.710 [2024-04-24 19:52:29.061245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.710 [2024-04-24 19:52:29.061271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.710 [2024-04-24 19:52:29.061286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.710 [2024-04-24 19:52:29.061298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.710 [2024-04-24 19:52:29.061327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.710 qpair failed and we were unable to recover it.
00:21:47.710 [2024-04-24 19:52:29.071081] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.710 [2024-04-24 19:52:29.071243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.710 [2024-04-24 19:52:29.071271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.710 [2024-04-24 19:52:29.071287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.710 [2024-04-24 19:52:29.071300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.710 [2024-04-24 19:52:29.071328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.710 qpair failed and we were unable to recover it.
00:21:47.710 [2024-04-24 19:52:29.081113] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.710 [2024-04-24 19:52:29.081309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.710 [2024-04-24 19:52:29.081335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.710 [2024-04-24 19:52:29.081355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.710 [2024-04-24 19:52:29.081369] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.710 [2024-04-24 19:52:29.081397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.710 qpair failed and we were unable to recover it.
00:21:47.710 [2024-04-24 19:52:29.091083] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.710 [2024-04-24 19:52:29.091261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.710 [2024-04-24 19:52:29.091288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.710 [2024-04-24 19:52:29.091304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.710 [2024-04-24 19:52:29.091317] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.710 [2024-04-24 19:52:29.091345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.710 qpair failed and we were unable to recover it.
00:21:47.710 [2024-04-24 19:52:29.101124] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.710 [2024-04-24 19:52:29.101278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.710 [2024-04-24 19:52:29.101304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.710 [2024-04-24 19:52:29.101320] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.710 [2024-04-24 19:52:29.101332] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.710 [2024-04-24 19:52:29.101361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.710 qpair failed and we were unable to recover it.
00:21:47.710 [2024-04-24 19:52:29.111137] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.710 [2024-04-24 19:52:29.111291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.710 [2024-04-24 19:52:29.111317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.711 [2024-04-24 19:52:29.111333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.711 [2024-04-24 19:52:29.111346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.711 [2024-04-24 19:52:29.111373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.711 qpair failed and we were unable to recover it.
00:21:47.711 [2024-04-24 19:52:29.121298] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.711 [2024-04-24 19:52:29.121464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.711 [2024-04-24 19:52:29.121490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.711 [2024-04-24 19:52:29.121506] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.711 [2024-04-24 19:52:29.121518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.711 [2024-04-24 19:52:29.121546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.711 qpair failed and we were unable to recover it.
00:21:47.711 [2024-04-24 19:52:29.131239] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.711 [2024-04-24 19:52:29.131437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.711 [2024-04-24 19:52:29.131477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.711 [2024-04-24 19:52:29.131492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.711 [2024-04-24 19:52:29.131504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.711 [2024-04-24 19:52:29.131546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.711 qpair failed and we were unable to recover it.
00:21:47.711 [2024-04-24 19:52:29.141235] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.711 [2024-04-24 19:52:29.141391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.711 [2024-04-24 19:52:29.141417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.711 [2024-04-24 19:52:29.141432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.711 [2024-04-24 19:52:29.141444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.711 [2024-04-24 19:52:29.141472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.711 qpair failed and we were unable to recover it.
00:21:47.711 [2024-04-24 19:52:29.151271] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.711 [2024-04-24 19:52:29.151423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.711 [2024-04-24 19:52:29.151449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.711 [2024-04-24 19:52:29.151464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.711 [2024-04-24 19:52:29.151477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.711 [2024-04-24 19:52:29.151505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.711 qpair failed and we were unable to recover it.
00:21:47.711 [2024-04-24 19:52:29.161316] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.711 [2024-04-24 19:52:29.161482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.711 [2024-04-24 19:52:29.161509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.711 [2024-04-24 19:52:29.161524] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.711 [2024-04-24 19:52:29.161536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.711 [2024-04-24 19:52:29.161565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.711 qpair failed and we were unable to recover it.
00:21:47.711 [2024-04-24 19:52:29.171346] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.711 [2024-04-24 19:52:29.171527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.711 [2024-04-24 19:52:29.171567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.711 [2024-04-24 19:52:29.171588] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.711 [2024-04-24 19:52:29.171601] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.711 [2024-04-24 19:52:29.171653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.711 qpair failed and we were unable to recover it.
00:21:47.711 [2024-04-24 19:52:29.181369] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.711 [2024-04-24 19:52:29.181595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.711 [2024-04-24 19:52:29.181622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.711 [2024-04-24 19:52:29.181645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.711 [2024-04-24 19:52:29.181658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.711 [2024-04-24 19:52:29.181689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.711 qpair failed and we were unable to recover it.
00:21:47.711 [2024-04-24 19:52:29.191377] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.711 [2024-04-24 19:52:29.191529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.711 [2024-04-24 19:52:29.191556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.711 [2024-04-24 19:52:29.191571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.711 [2024-04-24 19:52:29.191584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.711 [2024-04-24 19:52:29.191612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.711 qpair failed and we were unable to recover it.
00:21:47.711 [2024-04-24 19:52:29.201430] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.711 [2024-04-24 19:52:29.201589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.711 [2024-04-24 19:52:29.201615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.711 [2024-04-24 19:52:29.201639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.711 [2024-04-24 19:52:29.201654] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.711 [2024-04-24 19:52:29.201683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.711 qpair failed and we were unable to recover it.
00:21:47.711 [2024-04-24 19:52:29.211452] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.711 [2024-04-24 19:52:29.211647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.711 [2024-04-24 19:52:29.211674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.711 [2024-04-24 19:52:29.211690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.711 [2024-04-24 19:52:29.211702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.711 [2024-04-24 19:52:29.211731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.711 qpair failed and we were unable to recover it.
00:21:47.711 [2024-04-24 19:52:29.221532] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.711 [2024-04-24 19:52:29.221740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.711 [2024-04-24 19:52:29.221767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.711 [2024-04-24 19:52:29.221782] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.711 [2024-04-24 19:52:29.221795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.711 [2024-04-24 19:52:29.221823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.711 qpair failed and we were unable to recover it.
00:21:47.971 [2024-04-24 19:52:29.231529] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.971 [2024-04-24 19:52:29.231704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.971 [2024-04-24 19:52:29.231731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.971 [2024-04-24 19:52:29.231747] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.971 [2024-04-24 19:52:29.231759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.971 [2024-04-24 19:52:29.231787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.971 qpair failed and we were unable to recover it.
00:21:47.971 [2024-04-24 19:52:29.241564] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:47.971 [2024-04-24 19:52:29.241733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:47.971 [2024-04-24 19:52:29.241759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:47.971 [2024-04-24 19:52:29.241774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:47.971 [2024-04-24 19:52:29.241787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:47.971 [2024-04-24 19:52:29.241816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:47.971 qpair failed and we were unable to recover it.
00:21:47.971 [2024-04-24 19:52:29.251581] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.971 [2024-04-24 19:52:29.251753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.971 [2024-04-24 19:52:29.251779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.971 [2024-04-24 19:52:29.251795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.971 [2024-04-24 19:52:29.251808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.971 [2024-04-24 19:52:29.251836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.971 qpair failed and we were unable to recover it. 00:21:47.971 [2024-04-24 19:52:29.261601] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.971 [2024-04-24 19:52:29.261777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.971 [2024-04-24 19:52:29.261804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.971 [2024-04-24 19:52:29.261825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.971 [2024-04-24 19:52:29.261838] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.971 [2024-04-24 19:52:29.261866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.971 qpair failed and we were unable to recover it. 00:21:47.971 [2024-04-24 19:52:29.271648] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.971 [2024-04-24 19:52:29.271809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.971 [2024-04-24 19:52:29.271836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.971 [2024-04-24 19:52:29.271851] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.971 [2024-04-24 19:52:29.271863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.971 [2024-04-24 19:52:29.271891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.971 qpair failed and we were unable to recover it. 
00:21:47.971 [2024-04-24 19:52:29.281703] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.971 [2024-04-24 19:52:29.281858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.971 [2024-04-24 19:52:29.281884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.971 [2024-04-24 19:52:29.281900] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.971 [2024-04-24 19:52:29.281913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.971 [2024-04-24 19:52:29.281942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.971 qpair failed and we were unable to recover it. 00:21:47.971 [2024-04-24 19:52:29.291701] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.971 [2024-04-24 19:52:29.291856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.971 [2024-04-24 19:52:29.291882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.971 [2024-04-24 19:52:29.291897] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.971 [2024-04-24 19:52:29.291910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.971 [2024-04-24 19:52:29.291939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.971 qpair failed and we were unable to recover it. 00:21:47.971 [2024-04-24 19:52:29.301741] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.971 [2024-04-24 19:52:29.301945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.971 [2024-04-24 19:52:29.301971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.971 [2024-04-24 19:52:29.301987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.971 [2024-04-24 19:52:29.301999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.971 [2024-04-24 19:52:29.302027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.971 qpair failed and we were unable to recover it. 
00:21:47.971 [2024-04-24 19:52:29.311856] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.971 [2024-04-24 19:52:29.312008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.971 [2024-04-24 19:52:29.312033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.971 [2024-04-24 19:52:29.312049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.971 [2024-04-24 19:52:29.312061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.971 [2024-04-24 19:52:29.312089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.971 qpair failed and we were unable to recover it. 00:21:47.971 [2024-04-24 19:52:29.321804] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.971 [2024-04-24 19:52:29.321958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.971 [2024-04-24 19:52:29.321984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.971 [2024-04-24 19:52:29.321999] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.971 [2024-04-24 19:52:29.322012] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.971 [2024-04-24 19:52:29.322040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.971 qpair failed and we were unable to recover it. 00:21:47.971 [2024-04-24 19:52:29.331847] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.972 [2024-04-24 19:52:29.332008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.972 [2024-04-24 19:52:29.332034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.972 [2024-04-24 19:52:29.332050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.972 [2024-04-24 19:52:29.332063] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.972 [2024-04-24 19:52:29.332091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.972 qpair failed and we were unable to recover it. 
00:21:47.972 [2024-04-24 19:52:29.341842] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.972 [2024-04-24 19:52:29.341998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.972 [2024-04-24 19:52:29.342024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.972 [2024-04-24 19:52:29.342040] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.972 [2024-04-24 19:52:29.342052] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.972 [2024-04-24 19:52:29.342080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.972 qpair failed and we were unable to recover it. 00:21:47.972 [2024-04-24 19:52:29.351911] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.972 [2024-04-24 19:52:29.352073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.972 [2024-04-24 19:52:29.352104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.972 [2024-04-24 19:52:29.352120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.972 [2024-04-24 19:52:29.352133] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.972 [2024-04-24 19:52:29.352161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.972 qpair failed and we were unable to recover it. 00:21:47.972 [2024-04-24 19:52:29.361939] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.972 [2024-04-24 19:52:29.362101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.972 [2024-04-24 19:52:29.362128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.972 [2024-04-24 19:52:29.362144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.972 [2024-04-24 19:52:29.362156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.972 [2024-04-24 19:52:29.362184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.972 qpair failed and we were unable to recover it. 
00:21:47.972 [2024-04-24 19:52:29.371937] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.972 [2024-04-24 19:52:29.372097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.972 [2024-04-24 19:52:29.372123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.972 [2024-04-24 19:52:29.372138] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.972 [2024-04-24 19:52:29.372151] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.972 [2024-04-24 19:52:29.372179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.972 qpair failed and we were unable to recover it. 00:21:47.972 [2024-04-24 19:52:29.381963] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.972 [2024-04-24 19:52:29.382121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.972 [2024-04-24 19:52:29.382148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.972 [2024-04-24 19:52:29.382163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.972 [2024-04-24 19:52:29.382176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.972 [2024-04-24 19:52:29.382205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.972 qpair failed and we were unable to recover it. 00:21:47.972 [2024-04-24 19:52:29.391975] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.972 [2024-04-24 19:52:29.392146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.972 [2024-04-24 19:52:29.392172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.972 [2024-04-24 19:52:29.392188] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.972 [2024-04-24 19:52:29.392200] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.972 [2024-04-24 19:52:29.392228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.972 qpair failed and we were unable to recover it. 
00:21:47.972 [2024-04-24 19:52:29.402055] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.972 [2024-04-24 19:52:29.402258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.972 [2024-04-24 19:52:29.402284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.972 [2024-04-24 19:52:29.402300] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.972 [2024-04-24 19:52:29.402312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.972 [2024-04-24 19:52:29.402339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.972 qpair failed and we were unable to recover it. 00:21:47.972 [2024-04-24 19:52:29.412053] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.972 [2024-04-24 19:52:29.412214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.972 [2024-04-24 19:52:29.412240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.972 [2024-04-24 19:52:29.412255] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.972 [2024-04-24 19:52:29.412267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.972 [2024-04-24 19:52:29.412295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.972 qpair failed and we were unable to recover it. 00:21:47.972 [2024-04-24 19:52:29.422071] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.972 [2024-04-24 19:52:29.422266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.972 [2024-04-24 19:52:29.422292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.972 [2024-04-24 19:52:29.422307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.972 [2024-04-24 19:52:29.422320] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.972 [2024-04-24 19:52:29.422347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.972 qpair failed and we were unable to recover it. 
00:21:47.972 [2024-04-24 19:52:29.432149] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.972 [2024-04-24 19:52:29.432341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.972 [2024-04-24 19:52:29.432369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.972 [2024-04-24 19:52:29.432385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.972 [2024-04-24 19:52:29.432401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.972 [2024-04-24 19:52:29.432431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.972 qpair failed and we were unable to recover it. 00:21:47.972 [2024-04-24 19:52:29.442127] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.972 [2024-04-24 19:52:29.442286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.972 [2024-04-24 19:52:29.442317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.972 [2024-04-24 19:52:29.442334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.972 [2024-04-24 19:52:29.442347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.972 [2024-04-24 19:52:29.442375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.972 qpair failed and we were unable to recover it. 00:21:47.972 [2024-04-24 19:52:29.452146] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.972 [2024-04-24 19:52:29.452307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.972 [2024-04-24 19:52:29.452333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.972 [2024-04-24 19:52:29.452349] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.972 [2024-04-24 19:52:29.452362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.972 [2024-04-24 19:52:29.452390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.972 qpair failed and we were unable to recover it. 
00:21:47.972 [2024-04-24 19:52:29.462195] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.972 [2024-04-24 19:52:29.462393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.972 [2024-04-24 19:52:29.462420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.973 [2024-04-24 19:52:29.462435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.973 [2024-04-24 19:52:29.462448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.973 [2024-04-24 19:52:29.462476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.973 qpair failed and we were unable to recover it. 00:21:47.973 [2024-04-24 19:52:29.472268] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.973 [2024-04-24 19:52:29.472457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.973 [2024-04-24 19:52:29.472484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.973 [2024-04-24 19:52:29.472500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.973 [2024-04-24 19:52:29.472512] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.973 [2024-04-24 19:52:29.472540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.973 qpair failed and we were unable to recover it. 00:21:47.973 [2024-04-24 19:52:29.482294] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:47.973 [2024-04-24 19:52:29.482457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:47.973 [2024-04-24 19:52:29.482483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:47.973 [2024-04-24 19:52:29.482498] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:47.973 [2024-04-24 19:52:29.482511] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:47.973 [2024-04-24 19:52:29.482545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:47.973 qpair failed and we were unable to recover it. 
00:21:48.232 [2024-04-24 19:52:29.492268] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.232 [2024-04-24 19:52:29.492422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.232 [2024-04-24 19:52:29.492449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.232 [2024-04-24 19:52:29.492464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.232 [2024-04-24 19:52:29.492477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.232 [2024-04-24 19:52:29.492505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.232 qpair failed and we were unable to recover it. 00:21:48.232 [2024-04-24 19:52:29.502343] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.232 [2024-04-24 19:52:29.502505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.232 [2024-04-24 19:52:29.502532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.232 [2024-04-24 19:52:29.502552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.232 [2024-04-24 19:52:29.502565] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.232 [2024-04-24 19:52:29.502594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.232 qpair failed and we were unable to recover it. 00:21:48.232 [2024-04-24 19:52:29.512325] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.232 [2024-04-24 19:52:29.512479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.232 [2024-04-24 19:52:29.512506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.232 [2024-04-24 19:52:29.512521] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.232 [2024-04-24 19:52:29.512535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.232 [2024-04-24 19:52:29.512564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.232 qpair failed and we were unable to recover it. 
00:21:48.232 [2024-04-24 19:52:29.522415] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.232 [2024-04-24 19:52:29.522582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.232 [2024-04-24 19:52:29.522608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.232 [2024-04-24 19:52:29.522624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.232 [2024-04-24 19:52:29.522645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.232 [2024-04-24 19:52:29.522674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.232 qpair failed and we were unable to recover it. 00:21:48.232 [2024-04-24 19:52:29.532384] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.232 [2024-04-24 19:52:29.532560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.232 [2024-04-24 19:52:29.532592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.232 [2024-04-24 19:52:29.532608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.232 [2024-04-24 19:52:29.532621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.232 [2024-04-24 19:52:29.532657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.232 qpair failed and we were unable to recover it. 00:21:48.232 [2024-04-24 19:52:29.542403] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.232 [2024-04-24 19:52:29.542573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.232 [2024-04-24 19:52:29.542600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.232 [2024-04-24 19:52:29.542620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.232 [2024-04-24 19:52:29.542641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.232 [2024-04-24 19:52:29.542671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.232 qpair failed and we were unable to recover it. 
00:21:48.232 [2024-04-24 19:52:29.552438] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.232 [2024-04-24 19:52:29.552621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.232 [2024-04-24 19:52:29.552660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.232 [2024-04-24 19:52:29.552677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.232 [2024-04-24 19:52:29.552690] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.232 [2024-04-24 19:52:29.552719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.232 qpair failed and we were unable to recover it. 00:21:48.232 [2024-04-24 19:52:29.562473] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.232 [2024-04-24 19:52:29.562643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.232 [2024-04-24 19:52:29.562670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.232 [2024-04-24 19:52:29.562685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.232 [2024-04-24 19:52:29.562698] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.232 [2024-04-24 19:52:29.562727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.232 qpair failed and we were unable to recover it. 00:21:48.232 [2024-04-24 19:52:29.572542] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.232 [2024-04-24 19:52:29.572767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.232 [2024-04-24 19:52:29.572794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.232 [2024-04-24 19:52:29.572810] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.232 [2024-04-24 19:52:29.572822] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.232 [2024-04-24 19:52:29.572855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.232 qpair failed and we were unable to recover it. 
00:21:48.232 [2024-04-24 19:52:29.582516] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.232 [2024-04-24 19:52:29.582690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.232 [2024-04-24 19:52:29.582715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.232 [2024-04-24 19:52:29.582731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.232 [2024-04-24 19:52:29.582744] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.232 [2024-04-24 19:52:29.582772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.232 qpair failed and we were unable to recover it. 00:21:48.232 [2024-04-24 19:52:29.592554] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.232 [2024-04-24 19:52:29.592739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.232 [2024-04-24 19:52:29.592766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.232 [2024-04-24 19:52:29.592781] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.232 [2024-04-24 19:52:29.592794] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.232 [2024-04-24 19:52:29.592822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.232 qpair failed and we were unable to recover it. 00:21:48.232 [2024-04-24 19:52:29.602635] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.232 [2024-04-24 19:52:29.602804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.232 [2024-04-24 19:52:29.602830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.232 [2024-04-24 19:52:29.602846] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.232 [2024-04-24 19:52:29.602859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.232 [2024-04-24 19:52:29.602887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.232 qpair failed and we were unable to recover it. 
00:21:48.232 [2024-04-24 19:52:29.612661] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.232 [2024-04-24 19:52:29.612851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.232 [2024-04-24 19:52:29.612877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.232 [2024-04-24 19:52:29.612893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.233 [2024-04-24 19:52:29.612906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.233 [2024-04-24 19:52:29.612939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.233 qpair failed and we were unable to recover it. 00:21:48.233 [2024-04-24 19:52:29.622635] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.233 [2024-04-24 19:52:29.622800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.233 [2024-04-24 19:52:29.622831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.233 [2024-04-24 19:52:29.622847] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.233 [2024-04-24 19:52:29.622860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.233 [2024-04-24 19:52:29.622890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.233 qpair failed and we were unable to recover it. 00:21:48.233 [2024-04-24 19:52:29.632666] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.233 [2024-04-24 19:52:29.632852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.233 [2024-04-24 19:52:29.632878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.233 [2024-04-24 19:52:29.632893] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.233 [2024-04-24 19:52:29.632905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.233 [2024-04-24 19:52:29.632933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.233 qpair failed and we were unable to recover it. 
00:21:48.233 [2024-04-24 19:52:29.642748] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.233 [2024-04-24 19:52:29.642947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.233 [2024-04-24 19:52:29.642974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.233 [2024-04-24 19:52:29.642989] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.233 [2024-04-24 19:52:29.643003] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.233 [2024-04-24 19:52:29.643031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.233 qpair failed and we were unable to recover it. 00:21:48.233 [2024-04-24 19:52:29.652740] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.233 [2024-04-24 19:52:29.652931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.233 [2024-04-24 19:52:29.652971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.233 [2024-04-24 19:52:29.652986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.233 [2024-04-24 19:52:29.652998] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.233 [2024-04-24 19:52:29.653040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.233 qpair failed and we were unable to recover it. 00:21:48.233 [2024-04-24 19:52:29.662752] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.233 [2024-04-24 19:52:29.662911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.233 [2024-04-24 19:52:29.662937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.233 [2024-04-24 19:52:29.662951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.233 [2024-04-24 19:52:29.662969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.233 [2024-04-24 19:52:29.662997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.233 qpair failed and we were unable to recover it. 
00:21:48.233 [2024-04-24 19:52:29.672773] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.233 [2024-04-24 19:52:29.672945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.233 [2024-04-24 19:52:29.672971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.233 [2024-04-24 19:52:29.672986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.233 [2024-04-24 19:52:29.672999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.233 [2024-04-24 19:52:29.673026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.233 qpair failed and we were unable to recover it. 00:21:48.233 [2024-04-24 19:52:29.682807] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.233 [2024-04-24 19:52:29.682975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.233 [2024-04-24 19:52:29.683001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.233 [2024-04-24 19:52:29.683016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.233 [2024-04-24 19:52:29.683029] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.233 [2024-04-24 19:52:29.683057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.233 qpair failed and we were unable to recover it. 00:21:48.233 [2024-04-24 19:52:29.692836] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.233 [2024-04-24 19:52:29.692992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.233 [2024-04-24 19:52:29.693018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.233 [2024-04-24 19:52:29.693033] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.233 [2024-04-24 19:52:29.693046] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.233 [2024-04-24 19:52:29.693075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.233 qpair failed and we were unable to recover it. 
00:21:48.233 [2024-04-24 19:52:29.702840] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.233 [2024-04-24 19:52:29.702996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.233 [2024-04-24 19:52:29.703023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.233 [2024-04-24 19:52:29.703039] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.233 [2024-04-24 19:52:29.703051] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.233 [2024-04-24 19:52:29.703080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.233 qpair failed and we were unable to recover it. 00:21:48.233 [2024-04-24 19:52:29.712873] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.233 [2024-04-24 19:52:29.713044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.233 [2024-04-24 19:52:29.713071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.234 [2024-04-24 19:52:29.713086] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.234 [2024-04-24 19:52:29.713099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.234 [2024-04-24 19:52:29.713127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.234 qpair failed and we were unable to recover it. 00:21:48.234 [2024-04-24 19:52:29.722909] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.234 [2024-04-24 19:52:29.723071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.234 [2024-04-24 19:52:29.723096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.234 [2024-04-24 19:52:29.723111] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.234 [2024-04-24 19:52:29.723124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.234 [2024-04-24 19:52:29.723151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.234 qpair failed and we were unable to recover it. 
00:21:48.234 [2024-04-24 19:52:29.732924] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.234 [2024-04-24 19:52:29.733082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.234 [2024-04-24 19:52:29.733109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.234 [2024-04-24 19:52:29.733127] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.234 [2024-04-24 19:52:29.733140] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.234 [2024-04-24 19:52:29.733168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.234 qpair failed and we were unable to recover it. 00:21:48.234 [2024-04-24 19:52:29.742980] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.234 [2024-04-24 19:52:29.743134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.234 [2024-04-24 19:52:29.743160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.234 [2024-04-24 19:52:29.743176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.234 [2024-04-24 19:52:29.743189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.234 [2024-04-24 19:52:29.743216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.234 qpair failed and we were unable to recover it. 00:21:48.492 [2024-04-24 19:52:29.753016] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.492 [2024-04-24 19:52:29.753178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.492 [2024-04-24 19:52:29.753204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.492 [2024-04-24 19:52:29.753220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.492 [2024-04-24 19:52:29.753239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.493 [2024-04-24 19:52:29.753267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.493 qpair failed and we were unable to recover it. 
00:21:48.493 [2024-04-24 19:52:29.763047] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.493 [2024-04-24 19:52:29.763214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.493 [2024-04-24 19:52:29.763240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.493 [2024-04-24 19:52:29.763256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.493 [2024-04-24 19:52:29.763269] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.493 [2024-04-24 19:52:29.763297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.493 qpair failed and we were unable to recover it. 00:21:48.493 [2024-04-24 19:52:29.773078] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.493 [2024-04-24 19:52:29.773292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.493 [2024-04-24 19:52:29.773318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.493 [2024-04-24 19:52:29.773334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.493 [2024-04-24 19:52:29.773346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.493 [2024-04-24 19:52:29.773385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.493 qpair failed and we were unable to recover it. 00:21:48.493 [2024-04-24 19:52:29.783091] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.493 [2024-04-24 19:52:29.783258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.493 [2024-04-24 19:52:29.783284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.493 [2024-04-24 19:52:29.783299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.493 [2024-04-24 19:52:29.783311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.493 [2024-04-24 19:52:29.783348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.493 qpair failed and we were unable to recover it. 
00:21:48.493 [2024-04-24 19:52:29.793172] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.493 [2024-04-24 19:52:29.793339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.493 [2024-04-24 19:52:29.793364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.493 [2024-04-24 19:52:29.793379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.493 [2024-04-24 19:52:29.793392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.493 [2024-04-24 19:52:29.793422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.493 qpair failed and we were unable to recover it. 00:21:48.493 [2024-04-24 19:52:29.803120] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.493 [2024-04-24 19:52:29.803287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.493 [2024-04-24 19:52:29.803313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.493 [2024-04-24 19:52:29.803329] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.493 [2024-04-24 19:52:29.803341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.493 [2024-04-24 19:52:29.803369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.493 qpair failed and we were unable to recover it. 00:21:48.493 [2024-04-24 19:52:29.813161] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.493 [2024-04-24 19:52:29.813320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.493 [2024-04-24 19:52:29.813347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.493 [2024-04-24 19:52:29.813362] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.493 [2024-04-24 19:52:29.813375] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.493 [2024-04-24 19:52:29.813403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.493 qpair failed and we were unable to recover it. 
00:21:48.493 [2024-04-24 19:52:29.823282] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.493 [2024-04-24 19:52:29.823442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.493 [2024-04-24 19:52:29.823467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.493 [2024-04-24 19:52:29.823482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.493 [2024-04-24 19:52:29.823495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.493 [2024-04-24 19:52:29.823523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.493 qpair failed and we were unable to recover it. 00:21:48.493 [2024-04-24 19:52:29.833176] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.493 [2024-04-24 19:52:29.833331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.493 [2024-04-24 19:52:29.833356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.493 [2024-04-24 19:52:29.833371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.493 [2024-04-24 19:52:29.833384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.493 [2024-04-24 19:52:29.833412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.493 qpair failed and we were unable to recover it. 00:21:48.493 [2024-04-24 19:52:29.843275] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.493 [2024-04-24 19:52:29.843443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.493 [2024-04-24 19:52:29.843468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.493 [2024-04-24 19:52:29.843483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.493 [2024-04-24 19:52:29.843502] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.493 [2024-04-24 19:52:29.843532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.493 qpair failed and we were unable to recover it. 
00:21:48.493 [2024-04-24 19:52:29.853269] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.493 [2024-04-24 19:52:29.853437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.493 [2024-04-24 19:52:29.853463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.493 [2024-04-24 19:52:29.853479] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.493 [2024-04-24 19:52:29.853492] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.493 [2024-04-24 19:52:29.853524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.493 qpair failed and we were unable to recover it. 00:21:48.493 [2024-04-24 19:52:29.863299] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.493 [2024-04-24 19:52:29.863488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.493 [2024-04-24 19:52:29.863515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.493 [2024-04-24 19:52:29.863534] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.493 [2024-04-24 19:52:29.863549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.493 [2024-04-24 19:52:29.863578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.493 qpair failed and we were unable to recover it. 00:21:48.493 [2024-04-24 19:52:29.873353] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.493 [2024-04-24 19:52:29.873521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.493 [2024-04-24 19:52:29.873548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.493 [2024-04-24 19:52:29.873567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.493 [2024-04-24 19:52:29.873580] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.493 [2024-04-24 19:52:29.873609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.493 qpair failed and we were unable to recover it. 
00:21:48.493 [2024-04-24 19:52:29.883334] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.493 [2024-04-24 19:52:29.883494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.493 [2024-04-24 19:52:29.883520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.493 [2024-04-24 19:52:29.883535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.493 [2024-04-24 19:52:29.883549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.493 [2024-04-24 19:52:29.883577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.493 qpair failed and we were unable to recover it. 00:21:48.493 [2024-04-24 19:52:29.893363] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.493 [2024-04-24 19:52:29.893526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.494 [2024-04-24 19:52:29.893553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.494 [2024-04-24 19:52:29.893568] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.494 [2024-04-24 19:52:29.893581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.494 [2024-04-24 19:52:29.893609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.494 qpair failed and we were unable to recover it. 00:21:48.494 [2024-04-24 19:52:29.903477] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.494 [2024-04-24 19:52:29.903638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.494 [2024-04-24 19:52:29.903665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.494 [2024-04-24 19:52:29.903680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.494 [2024-04-24 19:52:29.903692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.494 [2024-04-24 19:52:29.903721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.494 qpair failed and we were unable to recover it. 
00:21:48.494 [2024-04-24 19:52:29.913404] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.494 [2024-04-24 19:52:29.913556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.494 [2024-04-24 19:52:29.913582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.494 [2024-04-24 19:52:29.913597] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.494 [2024-04-24 19:52:29.913609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.494 [2024-04-24 19:52:29.913646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.494 qpair failed and we were unable to recover it. 00:21:48.494 [2024-04-24 19:52:29.923449] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.494 [2024-04-24 19:52:29.923607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.494 [2024-04-24 19:52:29.923641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.494 [2024-04-24 19:52:29.923658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.494 [2024-04-24 19:52:29.923672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.494 [2024-04-24 19:52:29.923702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.494 qpair failed and we were unable to recover it. 00:21:48.494 [2024-04-24 19:52:29.933458] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.494 [2024-04-24 19:52:29.933614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.494 [2024-04-24 19:52:29.933646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.494 [2024-04-24 19:52:29.933667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.494 [2024-04-24 19:52:29.933680] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.494 [2024-04-24 19:52:29.933709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.494 qpair failed and we were unable to recover it. 
00:21:48.494 [2024-04-24 19:52:29.943525] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.494 [2024-04-24 19:52:29.943693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.494 [2024-04-24 19:52:29.943719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.494 [2024-04-24 19:52:29.943733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.494 [2024-04-24 19:52:29.943746] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.494 [2024-04-24 19:52:29.943774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.494 qpair failed and we were unable to recover it. 00:21:48.494 [2024-04-24 19:52:29.953606] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.494 [2024-04-24 19:52:29.953778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.494 [2024-04-24 19:52:29.953806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.494 [2024-04-24 19:52:29.953824] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.494 [2024-04-24 19:52:29.953837] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.494 [2024-04-24 19:52:29.953867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.494 qpair failed and we were unable to recover it. 00:21:48.494 [2024-04-24 19:52:29.963573] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.494 [2024-04-24 19:52:29.963780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.494 [2024-04-24 19:52:29.963806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.494 [2024-04-24 19:52:29.963821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.494 [2024-04-24 19:52:29.963835] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.494 [2024-04-24 19:52:29.963863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.494 qpair failed and we were unable to recover it. 
00:21:48.494 [2024-04-24 19:52:29.973603] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.494 [2024-04-24 19:52:29.973773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.494 [2024-04-24 19:52:29.973799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.494 [2024-04-24 19:52:29.973814] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.494 [2024-04-24 19:52:29.973827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.494 [2024-04-24 19:52:29.973855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.494 qpair failed and we were unable to recover it. 00:21:48.494 [2024-04-24 19:52:29.983620] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.494 [2024-04-24 19:52:29.983837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.494 [2024-04-24 19:52:29.983866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.494 [2024-04-24 19:52:29.983885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.494 [2024-04-24 19:52:29.983899] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.494 [2024-04-24 19:52:29.983928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.494 qpair failed and we were unable to recover it. 00:21:48.494 [2024-04-24 19:52:29.993683] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.494 [2024-04-24 19:52:29.993839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.494 [2024-04-24 19:52:29.993864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.494 [2024-04-24 19:52:29.993879] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.494 [2024-04-24 19:52:29.993892] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.494 [2024-04-24 19:52:29.993920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.494 qpair failed and we were unable to recover it. 
00:21:48.494 [2024-04-24 19:52:30.003719] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.494 [2024-04-24 19:52:30.003916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.494 [2024-04-24 19:52:30.003942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.494 [2024-04-24 19:52:30.003956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.494 [2024-04-24 19:52:30.003969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.494 [2024-04-24 19:52:30.003998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.494 qpair failed and we were unable to recover it. 00:21:48.754 [2024-04-24 19:52:30.013798] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.754 [2024-04-24 19:52:30.014041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.754 [2024-04-24 19:52:30.014072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.754 [2024-04-24 19:52:30.014088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.754 [2024-04-24 19:52:30.014102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.754 [2024-04-24 19:52:30.014132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.754 qpair failed and we were unable to recover it. 00:21:48.754 [2024-04-24 19:52:30.023736] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.754 [2024-04-24 19:52:30.023898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.754 [2024-04-24 19:52:30.023923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.754 [2024-04-24 19:52:30.023945] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.754 [2024-04-24 19:52:30.023958] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.754 [2024-04-24 19:52:30.023987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.754 qpair failed and we were unable to recover it. 
00:21:48.754 [2024-04-24 19:52:30.033769] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.754 [2024-04-24 19:52:30.033968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.754 [2024-04-24 19:52:30.033998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.754 [2024-04-24 19:52:30.034015] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.754 [2024-04-24 19:52:30.034029] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.754 [2024-04-24 19:52:30.034058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.754 qpair failed and we were unable to recover it. 00:21:48.754 [2024-04-24 19:52:30.043837] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.754 [2024-04-24 19:52:30.043997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.754 [2024-04-24 19:52:30.044021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.754 [2024-04-24 19:52:30.044036] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.754 [2024-04-24 19:52:30.044049] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.754 [2024-04-24 19:52:30.044079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.754 qpair failed and we were unable to recover it. 00:21:48.754 [2024-04-24 19:52:30.053921] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.754 [2024-04-24 19:52:30.054087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.754 [2024-04-24 19:52:30.054113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.754 [2024-04-24 19:52:30.054128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.754 [2024-04-24 19:52:30.054141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.754 [2024-04-24 19:52:30.054171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.754 qpair failed and we were unable to recover it. 
00:21:48.754 [2024-04-24 19:52:30.063846] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.754 [2024-04-24 19:52:30.064015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.754 [2024-04-24 19:52:30.064042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.754 [2024-04-24 19:52:30.064058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.754 [2024-04-24 19:52:30.064072] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.754 [2024-04-24 19:52:30.064100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.754 qpair failed and we were unable to recover it. 00:21:48.754 [2024-04-24 19:52:30.073890] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.754 [2024-04-24 19:52:30.074056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.754 [2024-04-24 19:52:30.074082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.754 [2024-04-24 19:52:30.074096] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.754 [2024-04-24 19:52:30.074109] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.754 [2024-04-24 19:52:30.074137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.754 qpair failed and we were unable to recover it. 00:21:48.754 [2024-04-24 19:52:30.083920] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.754 [2024-04-24 19:52:30.084085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.754 [2024-04-24 19:52:30.084110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.754 [2024-04-24 19:52:30.084126] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.754 [2024-04-24 19:52:30.084138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.754 [2024-04-24 19:52:30.084167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.754 qpair failed and we were unable to recover it. 
00:21:48.754 [2024-04-24 19:52:30.093970] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.754 [2024-04-24 19:52:30.094139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.754 [2024-04-24 19:52:30.094165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.754 [2024-04-24 19:52:30.094181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.755 [2024-04-24 19:52:30.094194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.755 [2024-04-24 19:52:30.094223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.755 qpair failed and we were unable to recover it. 00:21:48.755 [2024-04-24 19:52:30.103954] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.755 [2024-04-24 19:52:30.104165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.755 [2024-04-24 19:52:30.104194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.755 [2024-04-24 19:52:30.104213] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.755 [2024-04-24 19:52:30.104226] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.755 [2024-04-24 19:52:30.104264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.755 qpair failed and we were unable to recover it. 00:21:48.755 [2024-04-24 19:52:30.113975] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.755 [2024-04-24 19:52:30.114129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.755 [2024-04-24 19:52:30.114159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.755 [2024-04-24 19:52:30.114175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.755 [2024-04-24 19:52:30.114188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.755 [2024-04-24 19:52:30.114216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.755 qpair failed and we were unable to recover it. 
00:21:48.755 [2024-04-24 19:52:30.124045] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.755 [2024-04-24 19:52:30.124205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.755 [2024-04-24 19:52:30.124230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.755 [2024-04-24 19:52:30.124244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.755 [2024-04-24 19:52:30.124257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.755 [2024-04-24 19:52:30.124286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.755 qpair failed and we were unable to recover it. 00:21:48.755 [2024-04-24 19:52:30.134058] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.755 [2024-04-24 19:52:30.134215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.755 [2024-04-24 19:52:30.134241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.755 [2024-04-24 19:52:30.134256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.755 [2024-04-24 19:52:30.134269] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.755 [2024-04-24 19:52:30.134297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.755 qpair failed and we were unable to recover it. 00:21:48.755 [2024-04-24 19:52:30.144063] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.755 [2024-04-24 19:52:30.144230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.755 [2024-04-24 19:52:30.144256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.755 [2024-04-24 19:52:30.144271] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.755 [2024-04-24 19:52:30.144283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.755 [2024-04-24 19:52:30.144311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.755 qpair failed and we were unable to recover it. 
00:21:48.755 [2024-04-24 19:52:30.154087] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.755 [2024-04-24 19:52:30.154243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.755 [2024-04-24 19:52:30.154269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.755 [2024-04-24 19:52:30.154283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.755 [2024-04-24 19:52:30.154296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.755 [2024-04-24 19:52:30.154324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.755 qpair failed and we were unable to recover it. 00:21:48.755 [2024-04-24 19:52:30.164131] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.755 [2024-04-24 19:52:30.164291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.755 [2024-04-24 19:52:30.164316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.755 [2024-04-24 19:52:30.164331] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.755 [2024-04-24 19:52:30.164344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.755 [2024-04-24 19:52:30.164373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.755 qpair failed and we were unable to recover it. 00:21:48.755 [2024-04-24 19:52:30.174167] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.755 [2024-04-24 19:52:30.174327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.755 [2024-04-24 19:52:30.174352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.755 [2024-04-24 19:52:30.174368] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.755 [2024-04-24 19:52:30.174380] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.755 [2024-04-24 19:52:30.174409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.755 qpair failed and we were unable to recover it. 
00:21:48.755 [2024-04-24 19:52:30.184223] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.755 [2024-04-24 19:52:30.184382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.755 [2024-04-24 19:52:30.184408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.755 [2024-04-24 19:52:30.184423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.755 [2024-04-24 19:52:30.184435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.755 [2024-04-24 19:52:30.184464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.755 qpair failed and we were unable to recover it. 00:21:48.755 [2024-04-24 19:52:30.194200] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.755 [2024-04-24 19:52:30.194363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.755 [2024-04-24 19:52:30.194389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.755 [2024-04-24 19:52:30.194404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.755 [2024-04-24 19:52:30.194416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.755 [2024-04-24 19:52:30.194445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.755 qpair failed and we were unable to recover it. 00:21:48.755 [2024-04-24 19:52:30.204264] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.755 [2024-04-24 19:52:30.204426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.755 [2024-04-24 19:52:30.204456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.755 [2024-04-24 19:52:30.204473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.755 [2024-04-24 19:52:30.204485] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.755 [2024-04-24 19:52:30.204513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.755 qpair failed and we were unable to recover it. 
00:21:48.755 [2024-04-24 19:52:30.214257] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.755 [2024-04-24 19:52:30.214414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.755 [2024-04-24 19:52:30.214439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.755 [2024-04-24 19:52:30.214454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.755 [2024-04-24 19:52:30.214467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.755 [2024-04-24 19:52:30.214496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.755 qpair failed and we were unable to recover it. 00:21:48.755 [2024-04-24 19:52:30.224287] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.755 [2024-04-24 19:52:30.224440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.755 [2024-04-24 19:52:30.224465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.755 [2024-04-24 19:52:30.224480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.755 [2024-04-24 19:52:30.224492] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.755 [2024-04-24 19:52:30.224521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.755 qpair failed and we were unable to recover it. 00:21:48.755 [2024-04-24 19:52:30.234404] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.756 [2024-04-24 19:52:30.234556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.756 [2024-04-24 19:52:30.234581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.756 [2024-04-24 19:52:30.234596] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.756 [2024-04-24 19:52:30.234609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.756 [2024-04-24 19:52:30.234645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.756 qpair failed and we were unable to recover it. 
00:21:48.756 [2024-04-24 19:52:30.244401] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.756 [2024-04-24 19:52:30.244599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.756 [2024-04-24 19:52:30.244624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.756 [2024-04-24 19:52:30.244648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.756 [2024-04-24 19:52:30.244662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.756 [2024-04-24 19:52:30.244697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.756 qpair failed and we were unable to recover it. 00:21:48.756 [2024-04-24 19:52:30.254364] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.756 [2024-04-24 19:52:30.254523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.756 [2024-04-24 19:52:30.254549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.756 [2024-04-24 19:52:30.254564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.756 [2024-04-24 19:52:30.254577] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.756 [2024-04-24 19:52:30.254606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.756 qpair failed and we were unable to recover it. 00:21:48.756 [2024-04-24 19:52:30.264424] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:48.756 [2024-04-24 19:52:30.264608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:48.756 [2024-04-24 19:52:30.264640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:48.756 [2024-04-24 19:52:30.264657] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:48.756 [2024-04-24 19:52:30.264670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:48.756 [2024-04-24 19:52:30.264698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:48.756 qpair failed and we were unable to recover it. 
00:21:49.015 [2024-04-24 19:52:30.274434] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:49.015 [2024-04-24 19:52:30.274588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:49.015 [2024-04-24 19:52:30.274613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:49.015 [2024-04-24 19:52:30.274633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:49.015 [2024-04-24 19:52:30.274649] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:49.015 [2024-04-24 19:52:30.274678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.015 qpair failed and we were unable to recover it. 00:21:49.015 [2024-04-24 19:52:30.284479] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:49.015 [2024-04-24 19:52:30.284649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:49.015 [2024-04-24 19:52:30.284674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:49.015 [2024-04-24 19:52:30.284689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:49.015 [2024-04-24 19:52:30.284703] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:49.015 [2024-04-24 19:52:30.284731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.015 qpair failed and we were unable to recover it. 00:21:49.015 [2024-04-24 19:52:30.294492] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:49.015 [2024-04-24 19:52:30.294661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:49.015 [2024-04-24 19:52:30.294693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:49.015 [2024-04-24 19:52:30.294711] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:49.015 [2024-04-24 19:52:30.294724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:49.015 [2024-04-24 19:52:30.294753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.015 qpair failed and we were unable to recover it. 
00:21:49.015 [2024-04-24 19:52:30.304604] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:49.015 [2024-04-24 19:52:30.304777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:49.015 [2024-04-24 19:52:30.304803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:49.015 [2024-04-24 19:52:30.304819] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:49.015 [2024-04-24 19:52:30.304831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:49.015 [2024-04-24 19:52:30.304860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.015 qpair failed and we were unable to recover it. 00:21:49.015 [2024-04-24 19:52:30.314541] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:49.015 [2024-04-24 19:52:30.314695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:49.015 [2024-04-24 19:52:30.314720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:49.015 [2024-04-24 19:52:30.314735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:49.015 [2024-04-24 19:52:30.314748] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:49.015 [2024-04-24 19:52:30.314776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.015 qpair failed and we were unable to recover it. 00:21:49.015 [2024-04-24 19:52:30.324588] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:49.015 [2024-04-24 19:52:30.324756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:49.015 [2024-04-24 19:52:30.324781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:49.015 [2024-04-24 19:52:30.324797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:49.015 [2024-04-24 19:52:30.324809] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:49.015 [2024-04-24 19:52:30.324838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.015 qpair failed and we were unable to recover it. 
00:21:49.015 [2024-04-24 19:52:30.334702] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:49.015 [2024-04-24 19:52:30.334860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:49.015 [2024-04-24 19:52:30.334885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:49.015 [2024-04-24 19:52:30.334901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:49.015 [2024-04-24 19:52:30.334914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:49.015 [2024-04-24 19:52:30.334949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.015 qpair failed and we were unable to recover it. 00:21:49.015 [2024-04-24 19:52:30.344668] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:49.015 [2024-04-24 19:52:30.344843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:49.015 [2024-04-24 19:52:30.344868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:49.015 [2024-04-24 19:52:30.344883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:49.015 [2024-04-24 19:52:30.344895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:49.015 [2024-04-24 19:52:30.344924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.015 qpair failed and we were unable to recover it. 00:21:49.015 [2024-04-24 19:52:30.354705] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:49.015 [2024-04-24 19:52:30.354863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:49.015 [2024-04-24 19:52:30.354888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:49.015 [2024-04-24 19:52:30.354903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:49.015 [2024-04-24 19:52:30.354917] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:49.015 [2024-04-24 19:52:30.354946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.015 qpair failed and we were unable to recover it. 
00:21:49.015 [2024-04-24 19:52:30.364707] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:49.015 [2024-04-24 19:52:30.364860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:49.015 [2024-04-24 19:52:30.364884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:49.015 [2024-04-24 19:52:30.364899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:49.015 [2024-04-24 19:52:30.364911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:49.015 [2024-04-24 19:52:30.364939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.015 qpair failed and we were unable to recover it. 00:21:49.015 [2024-04-24 19:52:30.374733] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:49.015 [2024-04-24 19:52:30.374904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:49.015 [2024-04-24 19:52:30.374932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:49.015 [2024-04-24 19:52:30.374948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:49.016 [2024-04-24 19:52:30.374961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:49.016 [2024-04-24 19:52:30.374990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.016 qpair failed and we were unable to recover it. 00:21:49.016 [2024-04-24 19:52:30.384780] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:49.016 [2024-04-24 19:52:30.384984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:49.016 [2024-04-24 19:52:30.385014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:49.016 [2024-04-24 19:52:30.385032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:49.016 [2024-04-24 19:52:30.385046] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:49.016 [2024-04-24 19:52:30.385074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:49.016 qpair failed and we were unable to recover it. 
00:21:49.016 [2024-04-24 19:52:30.394768] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.016 [2024-04-24 19:52:30.394925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.016 [2024-04-24 19:52:30.394950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.016 [2024-04-24 19:52:30.394965] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.016 [2024-04-24 19:52:30.394977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.016 [2024-04-24 19:52:30.395005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.016 qpair failed and we were unable to recover it.
00:21:49.016 [2024-04-24 19:52:30.404789] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.016 [2024-04-24 19:52:30.404953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.016 [2024-04-24 19:52:30.404977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.016 [2024-04-24 19:52:30.404992] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.016 [2024-04-24 19:52:30.405006] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.016 [2024-04-24 19:52:30.405035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.016 qpair failed and we were unable to recover it.
00:21:49.016 [2024-04-24 19:52:30.414939] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.016 [2024-04-24 19:52:30.415104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.016 [2024-04-24 19:52:30.415130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.016 [2024-04-24 19:52:30.415146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.016 [2024-04-24 19:52:30.415158] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.016 [2024-04-24 19:52:30.415187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.016 qpair failed and we were unable to recover it.
00:21:49.016 [2024-04-24 19:52:30.424924] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.016 [2024-04-24 19:52:30.425089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.016 [2024-04-24 19:52:30.425116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.016 [2024-04-24 19:52:30.425134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.016 [2024-04-24 19:52:30.425153] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.016 [2024-04-24 19:52:30.425184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.016 qpair failed and we were unable to recover it.
00:21:49.016 [2024-04-24 19:52:30.434895] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.016 [2024-04-24 19:52:30.435080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.016 [2024-04-24 19:52:30.435107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.016 [2024-04-24 19:52:30.435123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.016 [2024-04-24 19:52:30.435136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.016 [2024-04-24 19:52:30.435166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.016 qpair failed and we were unable to recover it.
00:21:49.016 [2024-04-24 19:52:30.444925] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.016 [2024-04-24 19:52:30.445127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.016 [2024-04-24 19:52:30.445152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.016 [2024-04-24 19:52:30.445168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.016 [2024-04-24 19:52:30.445181] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.016 [2024-04-24 19:52:30.445209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.016 qpair failed and we were unable to recover it.
00:21:49.016 [2024-04-24 19:52:30.455059] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.016 [2024-04-24 19:52:30.455218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.016 [2024-04-24 19:52:30.455244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.016 [2024-04-24 19:52:30.455259] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.016 [2024-04-24 19:52:30.455272] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.016 [2024-04-24 19:52:30.455300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.016 qpair failed and we were unable to recover it.
00:21:49.016 [2024-04-24 19:52:30.464953] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.016 [2024-04-24 19:52:30.465103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.016 [2024-04-24 19:52:30.465129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.016 [2024-04-24 19:52:30.465144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.016 [2024-04-24 19:52:30.465156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.016 [2024-04-24 19:52:30.465185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.016 qpair failed and we were unable to recover it.
00:21:49.016 [2024-04-24 19:52:30.474993] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.016 [2024-04-24 19:52:30.475159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.016 [2024-04-24 19:52:30.475185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.016 [2024-04-24 19:52:30.475200] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.016 [2024-04-24 19:52:30.475212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.016 [2024-04-24 19:52:30.475241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.016 qpair failed and we were unable to recover it.
00:21:49.016 [2024-04-24 19:52:30.485043] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.016 [2024-04-24 19:52:30.485249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.016 [2024-04-24 19:52:30.485278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.016 [2024-04-24 19:52:30.485294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.016 [2024-04-24 19:52:30.485308] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.016 [2024-04-24 19:52:30.485339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.016 qpair failed and we were unable to recover it.
00:21:49.016 [2024-04-24 19:52:30.495139] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.016 [2024-04-24 19:52:30.495325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.016 [2024-04-24 19:52:30.495351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.016 [2024-04-24 19:52:30.495366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.016 [2024-04-24 19:52:30.495379] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.016 [2024-04-24 19:52:30.495408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.016 qpair failed and we were unable to recover it.
00:21:49.016 [2024-04-24 19:52:30.505068] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.016 [2024-04-24 19:52:30.505221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.016 [2024-04-24 19:52:30.505245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.016 [2024-04-24 19:52:30.505260] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.016 [2024-04-24 19:52:30.505273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.016 [2024-04-24 19:52:30.505301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.016 qpair failed and we were unable to recover it.
00:21:49.016 [2024-04-24 19:52:30.515106] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.017 [2024-04-24 19:52:30.515296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.017 [2024-04-24 19:52:30.515321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.017 [2024-04-24 19:52:30.515336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.017 [2024-04-24 19:52:30.515355] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.017 [2024-04-24 19:52:30.515385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.017 qpair failed and we were unable to recover it.
00:21:49.017 [2024-04-24 19:52:30.525207] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.017 [2024-04-24 19:52:30.525371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.017 [2024-04-24 19:52:30.525396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.017 [2024-04-24 19:52:30.525411] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.017 [2024-04-24 19:52:30.525424] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.017 [2024-04-24 19:52:30.525452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.017 qpair failed and we were unable to recover it.
00:21:49.276 [2024-04-24 19:52:30.535222] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.276 [2024-04-24 19:52:30.535399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.276 [2024-04-24 19:52:30.535424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.276 [2024-04-24 19:52:30.535439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.276 [2024-04-24 19:52:30.535452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.276 [2024-04-24 19:52:30.535481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.276 qpair failed and we were unable to recover it.
00:21:49.276 [2024-04-24 19:52:30.545240] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.276 [2024-04-24 19:52:30.545390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.276 [2024-04-24 19:52:30.545415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.276 [2024-04-24 19:52:30.545430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.276 [2024-04-24 19:52:30.545444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.276 [2024-04-24 19:52:30.545473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.276 qpair failed and we were unable to recover it.
00:21:49.276 [2024-04-24 19:52:30.555252] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.276 [2024-04-24 19:52:30.555402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.276 [2024-04-24 19:52:30.555428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.276 [2024-04-24 19:52:30.555442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.276 [2024-04-24 19:52:30.555455] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.276 [2024-04-24 19:52:30.555485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.276 qpair failed and we were unable to recover it.
00:21:49.276 [2024-04-24 19:52:30.565284] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.276 [2024-04-24 19:52:30.565455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.276 [2024-04-24 19:52:30.565480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.276 [2024-04-24 19:52:30.565495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.276 [2024-04-24 19:52:30.565507] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.276 [2024-04-24 19:52:30.565536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.276 qpair failed and we were unable to recover it.
00:21:49.276 [2024-04-24 19:52:30.575350] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.276 [2024-04-24 19:52:30.575508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.276 [2024-04-24 19:52:30.575534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.276 [2024-04-24 19:52:30.575549] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.276 [2024-04-24 19:52:30.575562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.276 [2024-04-24 19:52:30.575590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.276 qpair failed and we were unable to recover it.
00:21:49.276 [2024-04-24 19:52:30.585365] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.276 [2024-04-24 19:52:30.585524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.276 [2024-04-24 19:52:30.585550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.276 [2024-04-24 19:52:30.585565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.276 [2024-04-24 19:52:30.585578] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.276 [2024-04-24 19:52:30.585608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.276 qpair failed and we were unable to recover it.
00:21:49.276 [2024-04-24 19:52:30.595392] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.276 [2024-04-24 19:52:30.595571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.276 [2024-04-24 19:52:30.595597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.276 [2024-04-24 19:52:30.595611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.276 [2024-04-24 19:52:30.595624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.276 [2024-04-24 19:52:30.595662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.276 qpair failed and we were unable to recover it.
00:21:49.276 [2024-04-24 19:52:30.605422] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.276 [2024-04-24 19:52:30.605580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.276 [2024-04-24 19:52:30.605605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.276 [2024-04-24 19:52:30.605620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.276 [2024-04-24 19:52:30.605646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.276 [2024-04-24 19:52:30.605676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.276 qpair failed and we were unable to recover it.
00:21:49.276 [2024-04-24 19:52:30.615428] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.276 [2024-04-24 19:52:30.615581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.276 [2024-04-24 19:52:30.615606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.276 [2024-04-24 19:52:30.615621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.276 [2024-04-24 19:52:30.615641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.276 [2024-04-24 19:52:30.615671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.276 qpair failed and we were unable to recover it.
00:21:49.276 [2024-04-24 19:52:30.625452] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.277 [2024-04-24 19:52:30.625607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.277 [2024-04-24 19:52:30.625640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.277 [2024-04-24 19:52:30.625657] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.277 [2024-04-24 19:52:30.625669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.277 [2024-04-24 19:52:30.625698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.277 qpair failed and we were unable to recover it.
00:21:49.277 [2024-04-24 19:52:30.635488] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.277 [2024-04-24 19:52:30.635664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.277 [2024-04-24 19:52:30.635690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.277 [2024-04-24 19:52:30.635705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.277 [2024-04-24 19:52:30.635718] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.277 [2024-04-24 19:52:30.635746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.277 qpair failed and we were unable to recover it.
00:21:49.277 [2024-04-24 19:52:30.645523] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.277 [2024-04-24 19:52:30.645689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.277 [2024-04-24 19:52:30.645714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.277 [2024-04-24 19:52:30.645730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.277 [2024-04-24 19:52:30.645743] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.277 [2024-04-24 19:52:30.645774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.277 qpair failed and we were unable to recover it.
00:21:49.277 [2024-04-24 19:52:30.655541] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.277 [2024-04-24 19:52:30.655739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.277 [2024-04-24 19:52:30.655767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.277 [2024-04-24 19:52:30.655783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.277 [2024-04-24 19:52:30.655795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.277 [2024-04-24 19:52:30.655824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.277 qpair failed and we were unable to recover it.
00:21:49.277 [2024-04-24 19:52:30.665567] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.277 [2024-04-24 19:52:30.665729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.277 [2024-04-24 19:52:30.665756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.277 [2024-04-24 19:52:30.665772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.277 [2024-04-24 19:52:30.665784] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.277 [2024-04-24 19:52:30.665813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.277 qpair failed and we were unable to recover it.
00:21:49.277 [2024-04-24 19:52:30.675649] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.277 [2024-04-24 19:52:30.675809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.277 [2024-04-24 19:52:30.675835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.277 [2024-04-24 19:52:30.675851] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.277 [2024-04-24 19:52:30.675864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.277 [2024-04-24 19:52:30.675892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.277 qpair failed and we were unable to recover it.
00:21:49.277 [2024-04-24 19:52:30.685650] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.277 [2024-04-24 19:52:30.685810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.277 [2024-04-24 19:52:30.685837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.277 [2024-04-24 19:52:30.685852] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.277 [2024-04-24 19:52:30.685865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.277 [2024-04-24 19:52:30.685896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.277 qpair failed and we were unable to recover it.
00:21:49.277 [2024-04-24 19:52:30.695669] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.277 [2024-04-24 19:52:30.695834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.277 [2024-04-24 19:52:30.695859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.277 [2024-04-24 19:52:30.695880] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.277 [2024-04-24 19:52:30.695894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.277 [2024-04-24 19:52:30.695922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.277 qpair failed and we were unable to recover it.
00:21:49.277 [2024-04-24 19:52:30.705717] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.277 [2024-04-24 19:52:30.705916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.277 [2024-04-24 19:52:30.705943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.277 [2024-04-24 19:52:30.705958] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.277 [2024-04-24 19:52:30.705971] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.277 [2024-04-24 19:52:30.705999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.277 qpair failed and we were unable to recover it.
00:21:49.277 [2024-04-24 19:52:30.715736] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.277 [2024-04-24 19:52:30.715897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.277 [2024-04-24 19:52:30.715922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.277 [2024-04-24 19:52:30.715937] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.277 [2024-04-24 19:52:30.715950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.277 [2024-04-24 19:52:30.715980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.277 qpair failed and we were unable to recover it.
00:21:49.277 [2024-04-24 19:52:30.725884] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.277 [2024-04-24 19:52:30.726064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.277 [2024-04-24 19:52:30.726090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.277 [2024-04-24 19:52:30.726105] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.277 [2024-04-24 19:52:30.726118] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.277 [2024-04-24 19:52:30.726146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.277 qpair failed and we were unable to recover it.
00:21:49.277 [2024-04-24 19:52:30.735770] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.277 [2024-04-24 19:52:30.735932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.277 [2024-04-24 19:52:30.735959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.277 [2024-04-24 19:52:30.735974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.277 [2024-04-24 19:52:30.735988] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.277 [2024-04-24 19:52:30.736017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.277 qpair failed and we were unable to recover it.
00:21:49.277 [2024-04-24 19:52:30.745859] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.277 [2024-04-24 19:52:30.746032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.277 [2024-04-24 19:52:30.746058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.277 [2024-04-24 19:52:30.746073] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.277 [2024-04-24 19:52:30.746086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.277 [2024-04-24 19:52:30.746115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.277 qpair failed and we were unable to recover it.
00:21:49.277 [2024-04-24 19:52:30.755813] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.277 [2024-04-24 19:52:30.755966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.277 [2024-04-24 19:52:30.755993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.277 [2024-04-24 19:52:30.756008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.277 [2024-04-24 19:52:30.756021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.278 [2024-04-24 19:52:30.756048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.278 qpair failed and we were unable to recover it.
00:21:49.278 [2024-04-24 19:52:30.765848] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.278 [2024-04-24 19:52:30.766014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.278 [2024-04-24 19:52:30.766040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.278 [2024-04-24 19:52:30.766055] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.278 [2024-04-24 19:52:30.766067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.278 [2024-04-24 19:52:30.766095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.278 qpair failed and we were unable to recover it.
00:21:49.278 [2024-04-24 19:52:30.775917] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.278 [2024-04-24 19:52:30.776076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.278 [2024-04-24 19:52:30.776103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.278 [2024-04-24 19:52:30.776119] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.278 [2024-04-24 19:52:30.776131] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.278 [2024-04-24 19:52:30.776159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.278 qpair failed and we were unable to recover it.
00:21:49.278 [2024-04-24 19:52:30.785935] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.278 [2024-04-24 19:52:30.786135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.278 [2024-04-24 19:52:30.786162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.278 [2024-04-24 19:52:30.786182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.278 [2024-04-24 19:52:30.786196] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.278 [2024-04-24 19:52:30.786224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.278 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.795982] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.796141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.796180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.796195] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.796207] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.796236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.806004] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.806207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.806234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.806250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.806266] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.806296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.816030] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.816232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.816260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.816275] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.816288] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.816316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.826086] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.826247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.826274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.826289] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.826302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.826330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.836065] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.836220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.836247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.836262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.836274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.836304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.846093] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.846296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.846323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.846338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.846350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.846379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.856117] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.856296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.856332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.856347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.856360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.856388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.866196] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.866397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.866424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.866440] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.866454] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.866482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.876200] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.876377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.876402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.876422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.876436] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.876463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.886233] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.886439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.886465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.886481] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.886493] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.886522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.896240] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.896401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.896428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.896444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.896457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.896485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.906301] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.906461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.906488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.906503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.906516] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.906544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.916320] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.916487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.916513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.916529] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.916542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.916570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.926360] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.926574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.926601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.926616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.926636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.926666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.936346] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.936509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.936536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.936552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.936564] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.936592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.946402] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.946621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.946655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.946671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.946684] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.946712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.956445] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.956647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.956676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.956692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.956704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.956736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.966487] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.966661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.966692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.966709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.966722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.966751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.976489] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.976659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.976685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.976701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.976714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.537 [2024-04-24 19:52:30.976742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.537 qpair failed and we were unable to recover it.
00:21:49.537 [2024-04-24 19:52:30.986486] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.537 [2024-04-24 19:52:30.986657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.537 [2024-04-24 19:52:30.986683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.537 [2024-04-24 19:52:30.986699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.537 [2024-04-24 19:52:30.986711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.538 [2024-04-24 19:52:30.986740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.538 qpair failed and we were unable to recover it.
00:21:49.538 [2024-04-24 19:52:30.996503] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.538 [2024-04-24 19:52:30.996672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.538 [2024-04-24 19:52:30.996699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.538 [2024-04-24 19:52:30.996714] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.538 [2024-04-24 19:52:30.996727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.538 [2024-04-24 19:52:30.996755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.538 qpair failed and we were unable to recover it.
00:21:49.538 [2024-04-24 19:52:31.006560] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.538 [2024-04-24 19:52:31.006733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.538 [2024-04-24 19:52:31.006759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.538 [2024-04-24 19:52:31.006775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.538 [2024-04-24 19:52:31.006787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.538 [2024-04-24 19:52:31.006821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.538 qpair failed and we were unable to recover it.
00:21:49.538 [2024-04-24 19:52:31.016679] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.538 [2024-04-24 19:52:31.016847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.538 [2024-04-24 19:52:31.016874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.538 [2024-04-24 19:52:31.016890] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.538 [2024-04-24 19:52:31.016902] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.538 [2024-04-24 19:52:31.016930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.538 qpair failed and we were unable to recover it.
00:21:49.538 [2024-04-24 19:52:31.026611] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.538 [2024-04-24 19:52:31.026776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.538 [2024-04-24 19:52:31.026803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.538 [2024-04-24 19:52:31.026818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.538 [2024-04-24 19:52:31.026831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.538 [2024-04-24 19:52:31.026859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.538 qpair failed and we were unable to recover it.
00:21:49.538 [2024-04-24 19:52:31.036661] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.538 [2024-04-24 19:52:31.036865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.538 [2024-04-24 19:52:31.036891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.538 [2024-04-24 19:52:31.036915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.538 [2024-04-24 19:52:31.036928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.538 [2024-04-24 19:52:31.036955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.538 qpair failed and we were unable to recover it.
00:21:49.538 [2024-04-24 19:52:31.046688] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.538 [2024-04-24 19:52:31.046879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.538 [2024-04-24 19:52:31.046905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.538 [2024-04-24 19:52:31.046920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.538 [2024-04-24 19:52:31.046933] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.538 [2024-04-24 19:52:31.046961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.538 qpair failed and we were unable to recover it.
00:21:49.797 [2024-04-24 19:52:31.056698] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.797 [2024-04-24 19:52:31.056869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.797 [2024-04-24 19:52:31.056901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.797 [2024-04-24 19:52:31.056917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.797 [2024-04-24 19:52:31.056929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.797 [2024-04-24 19:52:31.056974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.797 qpair failed and we were unable to recover it.
00:21:49.797 [2024-04-24 19:52:31.066736] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.797 [2024-04-24 19:52:31.066923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.797 [2024-04-24 19:52:31.066949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.797 [2024-04-24 19:52:31.066964] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.797 [2024-04-24 19:52:31.066977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.797 [2024-04-24 19:52:31.067005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.797 qpair failed and we were unable to recover it.
00:21:49.797 [2024-04-24 19:52:31.076864] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.797 [2024-04-24 19:52:31.077028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.797 [2024-04-24 19:52:31.077054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.797 [2024-04-24 19:52:31.077069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.797 [2024-04-24 19:52:31.077082] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.797 [2024-04-24 19:52:31.077110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.797 qpair failed and we were unable to recover it.
00:21:49.797 [2024-04-24 19:52:31.086790] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.797 [2024-04-24 19:52:31.087001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.797 [2024-04-24 19:52:31.087027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.797 [2024-04-24 19:52:31.087042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.797 [2024-04-24 19:52:31.087054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.797 [2024-04-24 19:52:31.087082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.797 qpair failed and we were unable to recover it.
00:21:49.797 [2024-04-24 19:52:31.096829] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.797 [2024-04-24 19:52:31.096989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.797 [2024-04-24 19:52:31.097015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.797 [2024-04-24 19:52:31.097031] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.797 [2024-04-24 19:52:31.097043] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.797 [2024-04-24 19:52:31.097077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.797 qpair failed and we were unable to recover it.
00:21:49.797 [2024-04-24 19:52:31.106849] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.797 [2024-04-24 19:52:31.107012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.797 [2024-04-24 19:52:31.107038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.797 [2024-04-24 19:52:31.107053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.797 [2024-04-24 19:52:31.107066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.797 [2024-04-24 19:52:31.107094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.797 qpair failed and we were unable to recover it.
00:21:49.797 [2024-04-24 19:52:31.116846] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.797 [2024-04-24 19:52:31.117016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.797 [2024-04-24 19:52:31.117041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.797 [2024-04-24 19:52:31.117056] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.797 [2024-04-24 19:52:31.117069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.798 [2024-04-24 19:52:31.117096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.798 qpair failed and we were unable to recover it.
00:21:49.798 [2024-04-24 19:52:31.126977] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.798 [2024-04-24 19:52:31.127176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.798 [2024-04-24 19:52:31.127201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.798 [2024-04-24 19:52:31.127216] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.798 [2024-04-24 19:52:31.127229] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.798 [2024-04-24 19:52:31.127257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.798 qpair failed and we were unable to recover it.
00:21:49.798 [2024-04-24 19:52:31.136949] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.798 [2024-04-24 19:52:31.137108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.798 [2024-04-24 19:52:31.137135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.798 [2024-04-24 19:52:31.137151] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.798 [2024-04-24 19:52:31.137164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.798 [2024-04-24 19:52:31.137194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.798 qpair failed and we were unable to recover it.
00:21:49.798 [2024-04-24 19:52:31.146952] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.798 [2024-04-24 19:52:31.147112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.798 [2024-04-24 19:52:31.147143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.798 [2024-04-24 19:52:31.147159] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.798 [2024-04-24 19:52:31.147172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.798 [2024-04-24 19:52:31.147201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.798 qpair failed and we were unable to recover it.
00:21:49.798 [2024-04-24 19:52:31.157019] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.798 [2024-04-24 19:52:31.157187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.798 [2024-04-24 19:52:31.157214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.798 [2024-04-24 19:52:31.157229] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.798 [2024-04-24 19:52:31.157241] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.798 [2024-04-24 19:52:31.157270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.798 qpair failed and we were unable to recover it.
00:21:49.798 [2024-04-24 19:52:31.167038] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.798 [2024-04-24 19:52:31.167200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.798 [2024-04-24 19:52:31.167226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.798 [2024-04-24 19:52:31.167242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.798 [2024-04-24 19:52:31.167254] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.798 [2024-04-24 19:52:31.167283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.798 qpair failed and we were unable to recover it.
00:21:49.798 [2024-04-24 19:52:31.177077] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.798 [2024-04-24 19:52:31.177249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.798 [2024-04-24 19:52:31.177275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.798 [2024-04-24 19:52:31.177291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.798 [2024-04-24 19:52:31.177303] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.798 [2024-04-24 19:52:31.177332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.798 qpair failed and we were unable to recover it.
00:21:49.798 [2024-04-24 19:52:31.187069] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.798 [2024-04-24 19:52:31.187234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.798 [2024-04-24 19:52:31.187261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.798 [2024-04-24 19:52:31.187276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.798 [2024-04-24 19:52:31.187289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.798 [2024-04-24 19:52:31.187322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.798 qpair failed and we were unable to recover it.
00:21:49.798 [2024-04-24 19:52:31.197238] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.798 [2024-04-24 19:52:31.197421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.798 [2024-04-24 19:52:31.197447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.798 [2024-04-24 19:52:31.197462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.798 [2024-04-24 19:52:31.197474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.798 [2024-04-24 19:52:31.197502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.798 qpair failed and we were unable to recover it.
00:21:49.798 [2024-04-24 19:52:31.207191] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.798 [2024-04-24 19:52:31.207360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.798 [2024-04-24 19:52:31.207385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.798 [2024-04-24 19:52:31.207401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.798 [2024-04-24 19:52:31.207413] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.798 [2024-04-24 19:52:31.207441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.798 qpair failed and we were unable to recover it.
00:21:49.798 [2024-04-24 19:52:31.217178] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.798 [2024-04-24 19:52:31.217337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.798 [2024-04-24 19:52:31.217364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.798 [2024-04-24 19:52:31.217380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.798 [2024-04-24 19:52:31.217392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.798 [2024-04-24 19:52:31.217420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.798 qpair failed and we were unable to recover it.
00:21:49.798 [2024-04-24 19:52:31.227216] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.798 [2024-04-24 19:52:31.227383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.798 [2024-04-24 19:52:31.227409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.798 [2024-04-24 19:52:31.227424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.798 [2024-04-24 19:52:31.227437] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.798 [2024-04-24 19:52:31.227465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.798 qpair failed and we were unable to recover it.
00:21:49.798 [2024-04-24 19:52:31.237256] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.798 [2024-04-24 19:52:31.237413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.798 [2024-04-24 19:52:31.237445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.798 [2024-04-24 19:52:31.237461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.798 [2024-04-24 19:52:31.237473] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.798 [2024-04-24 19:52:31.237501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.798 qpair failed and we were unable to recover it.
00:21:49.798 [2024-04-24 19:52:31.247281] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.798 [2024-04-24 19:52:31.247448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.798 [2024-04-24 19:52:31.247474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.798 [2024-04-24 19:52:31.247490] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.798 [2024-04-24 19:52:31.247503] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.798 [2024-04-24 19:52:31.247530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.798 qpair failed and we were unable to recover it.
00:21:49.798 [2024-04-24 19:52:31.257353] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.799 [2024-04-24 19:52:31.257521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.799 [2024-04-24 19:52:31.257547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.799 [2024-04-24 19:52:31.257563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.799 [2024-04-24 19:52:31.257576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.799 [2024-04-24 19:52:31.257604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.799 qpair failed and we were unable to recover it.
00:21:49.799 [2024-04-24 19:52:31.267336] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.799 [2024-04-24 19:52:31.267496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.799 [2024-04-24 19:52:31.267522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.799 [2024-04-24 19:52:31.267537] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.799 [2024-04-24 19:52:31.267549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.799 [2024-04-24 19:52:31.267577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.799 qpair failed and we were unable to recover it.
00:21:49.799 [2024-04-24 19:52:31.277386] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.799 [2024-04-24 19:52:31.277571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.799 [2024-04-24 19:52:31.277597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.799 [2024-04-24 19:52:31.277612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.799 [2024-04-24 19:52:31.277637] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.799 [2024-04-24 19:52:31.277669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.799 qpair failed and we were unable to recover it.
00:21:49.799 [2024-04-24 19:52:31.287394] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.799 [2024-04-24 19:52:31.287562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.799 [2024-04-24 19:52:31.287588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.799 [2024-04-24 19:52:31.287603] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.799 [2024-04-24 19:52:31.287616] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.799 [2024-04-24 19:52:31.287653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.799 qpair failed and we were unable to recover it.
00:21:49.799 [2024-04-24 19:52:31.297416] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.799 [2024-04-24 19:52:31.297590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.799 [2024-04-24 19:52:31.297636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.799 [2024-04-24 19:52:31.297654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.799 [2024-04-24 19:52:31.297670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.799 [2024-04-24 19:52:31.297698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.799 qpair failed and we were unable to recover it.
00:21:49.799 [2024-04-24 19:52:31.307458] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:49.799 [2024-04-24 19:52:31.307678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:49.799 [2024-04-24 19:52:31.307704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:49.799 [2024-04-24 19:52:31.307719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:49.799 [2024-04-24 19:52:31.307732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:49.799 [2024-04-24 19:52:31.307760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:49.799 qpair failed and we were unable to recover it.
00:21:50.060 [2024-04-24 19:52:31.317496] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.060 [2024-04-24 19:52:31.317669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.060 [2024-04-24 19:52:31.317695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.060 [2024-04-24 19:52:31.317710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.060 [2024-04-24 19:52:31.317723] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.060 [2024-04-24 19:52:31.317751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.060 qpair failed and we were unable to recover it.
00:21:50.060 [2024-04-24 19:52:31.327497] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.060 [2024-04-24 19:52:31.327683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.060 [2024-04-24 19:52:31.327709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.060 [2024-04-24 19:52:31.327725] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.060 [2024-04-24 19:52:31.327737] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.060 [2024-04-24 19:52:31.327765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.060 qpair failed and we were unable to recover it.
00:21:50.060 [2024-04-24 19:52:31.337538] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.060 [2024-04-24 19:52:31.337717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.060 [2024-04-24 19:52:31.337743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.060 [2024-04-24 19:52:31.337758] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.060 [2024-04-24 19:52:31.337771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.061 [2024-04-24 19:52:31.337799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.061 qpair failed and we were unable to recover it.
00:21:50.061 [2024-04-24 19:52:31.347592] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.061 [2024-04-24 19:52:31.347797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.061 [2024-04-24 19:52:31.347823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.061 [2024-04-24 19:52:31.347838] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.061 [2024-04-24 19:52:31.347851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.061 [2024-04-24 19:52:31.347879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.061 qpair failed and we were unable to recover it.
00:21:50.061 [2024-04-24 19:52:31.357670] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.061 [2024-04-24 19:52:31.357831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.061 [2024-04-24 19:52:31.357856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.061 [2024-04-24 19:52:31.357871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.061 [2024-04-24 19:52:31.357884] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.061 [2024-04-24 19:52:31.357912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.061 qpair failed and we were unable to recover it.
00:21:50.061 [2024-04-24 19:52:31.367661] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.061 [2024-04-24 19:52:31.367820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.061 [2024-04-24 19:52:31.367844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.061 [2024-04-24 19:52:31.367860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.061 [2024-04-24 19:52:31.367878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.061 [2024-04-24 19:52:31.367906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.061 qpair failed and we were unable to recover it.
00:21:50.061 [2024-04-24 19:52:31.377636] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.061 [2024-04-24 19:52:31.377840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.061 [2024-04-24 19:52:31.377867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.061 [2024-04-24 19:52:31.377882] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.061 [2024-04-24 19:52:31.377895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.061 [2024-04-24 19:52:31.377923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.061 qpair failed and we were unable to recover it.
00:21:50.061 [2024-04-24 19:52:31.387670] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.061 [2024-04-24 19:52:31.387843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.061 [2024-04-24 19:52:31.387869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.061 [2024-04-24 19:52:31.387884] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.061 [2024-04-24 19:52:31.387896] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.061 [2024-04-24 19:52:31.387924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.061 qpair failed and we were unable to recover it.
00:21:50.061 [2024-04-24 19:52:31.397702] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.061 [2024-04-24 19:52:31.397865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.061 [2024-04-24 19:52:31.397891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.061 [2024-04-24 19:52:31.397906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.061 [2024-04-24 19:52:31.397919] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.061 [2024-04-24 19:52:31.397958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.061 qpair failed and we were unable to recover it.
00:21:50.061 [2024-04-24 19:52:31.407794] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.061 [2024-04-24 19:52:31.407967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.061 [2024-04-24 19:52:31.407993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.061 [2024-04-24 19:52:31.408009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.061 [2024-04-24 19:52:31.408021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.061 [2024-04-24 19:52:31.408049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.061 qpair failed and we were unable to recover it.
00:21:50.061 [2024-04-24 19:52:31.417751] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.061 [2024-04-24 19:52:31.417957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.061 [2024-04-24 19:52:31.417984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.061 [2024-04-24 19:52:31.418000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.061 [2024-04-24 19:52:31.418013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.061 [2024-04-24 19:52:31.418042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.061 qpair failed and we were unable to recover it.
00:21:50.061 [2024-04-24 19:52:31.427798] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.061 [2024-04-24 19:52:31.427995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.061 [2024-04-24 19:52:31.428024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.061 [2024-04-24 19:52:31.428040] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.061 [2024-04-24 19:52:31.428053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.061 [2024-04-24 19:52:31.428083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.061 qpair failed and we were unable to recover it.
00:21:50.061 [2024-04-24 19:52:31.437806] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.061 [2024-04-24 19:52:31.437961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.061 [2024-04-24 19:52:31.437987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.061 [2024-04-24 19:52:31.438001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.061 [2024-04-24 19:52:31.438014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.061 [2024-04-24 19:52:31.438042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.061 qpair failed and we were unable to recover it.
00:21:50.061 [2024-04-24 19:52:31.447872] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.061 [2024-04-24 19:52:31.448026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.061 [2024-04-24 19:52:31.448051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.061 [2024-04-24 19:52:31.448065] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.061 [2024-04-24 19:52:31.448079] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.061 [2024-04-24 19:52:31.448107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.061 qpair failed and we were unable to recover it.
00:21:50.061 [2024-04-24 19:52:31.457898] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.061 [2024-04-24 19:52:31.458064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.061 [2024-04-24 19:52:31.458089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.061 [2024-04-24 19:52:31.458110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.061 [2024-04-24 19:52:31.458124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.061 [2024-04-24 19:52:31.458167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.061 qpair failed and we were unable to recover it.
00:21:50.061 [2024-04-24 19:52:31.467914] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.061 [2024-04-24 19:52:31.468067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.061 [2024-04-24 19:52:31.468092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.061 [2024-04-24 19:52:31.468108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.061 [2024-04-24 19:52:31.468121] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.061 [2024-04-24 19:52:31.468149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.061 qpair failed and we were unable to recover it.
00:21:50.061 [2024-04-24 19:52:31.477915] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.062 [2024-04-24 19:52:31.478099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.062 [2024-04-24 19:52:31.478125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.062 [2024-04-24 19:52:31.478140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.062 [2024-04-24 19:52:31.478153] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.062 [2024-04-24 19:52:31.478181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.062 qpair failed and we were unable to recover it.
00:21:50.062 [2024-04-24 19:52:31.488020] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.062 [2024-04-24 19:52:31.488182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.062 [2024-04-24 19:52:31.488207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.062 [2024-04-24 19:52:31.488221] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.062 [2024-04-24 19:52:31.488234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.062 [2024-04-24 19:52:31.488262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.062 qpair failed and we were unable to recover it.
00:21:50.062 [2024-04-24 19:52:31.498000] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.062 [2024-04-24 19:52:31.498158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.062 [2024-04-24 19:52:31.498184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.062 [2024-04-24 19:52:31.498199] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.062 [2024-04-24 19:52:31.498212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.062 [2024-04-24 19:52:31.498240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.062 qpair failed and we were unable to recover it.
00:21:50.062 [2024-04-24 19:52:31.508027] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.062 [2024-04-24 19:52:31.508178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.062 [2024-04-24 19:52:31.508204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.062 [2024-04-24 19:52:31.508220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.062 [2024-04-24 19:52:31.508233] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.062 [2024-04-24 19:52:31.508263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.062 qpair failed and we were unable to recover it.
00:21:50.062 [2024-04-24 19:52:31.518024] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.062 [2024-04-24 19:52:31.518194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.062 [2024-04-24 19:52:31.518220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.062 [2024-04-24 19:52:31.518235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.062 [2024-04-24 19:52:31.518248] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.062 [2024-04-24 19:52:31.518277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.062 qpair failed and we were unable to recover it.
00:21:50.062 [2024-04-24 19:52:31.528097] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.062 [2024-04-24 19:52:31.528288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.062 [2024-04-24 19:52:31.528314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.062 [2024-04-24 19:52:31.528330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.062 [2024-04-24 19:52:31.528347] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.062 [2024-04-24 19:52:31.528379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.062 qpair failed and we were unable to recover it.
00:21:50.062 [2024-04-24 19:52:31.538088] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.062 [2024-04-24 19:52:31.538253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.062 [2024-04-24 19:52:31.538279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.062 [2024-04-24 19:52:31.538294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.062 [2024-04-24 19:52:31.538307] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.062 [2024-04-24 19:52:31.538336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.062 qpair failed and we were unable to recover it. 00:21:50.062 [2024-04-24 19:52:31.548227] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.062 [2024-04-24 19:52:31.548382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.062 [2024-04-24 19:52:31.548407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.062 [2024-04-24 19:52:31.548428] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.062 [2024-04-24 19:52:31.548442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.062 [2024-04-24 19:52:31.548471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.062 qpair failed and we were unable to recover it. 00:21:50.062 [2024-04-24 19:52:31.558202] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.062 [2024-04-24 19:52:31.558413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.062 [2024-04-24 19:52:31.558438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.062 [2024-04-24 19:52:31.558454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.062 [2024-04-24 19:52:31.558468] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.062 [2024-04-24 19:52:31.558497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.062 qpair failed and we were unable to recover it. 
00:21:50.062 [2024-04-24 19:52:31.568200] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.062 [2024-04-24 19:52:31.568422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.062 [2024-04-24 19:52:31.568447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.062 [2024-04-24 19:52:31.568462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.062 [2024-04-24 19:52:31.568475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.062 [2024-04-24 19:52:31.568504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.062 qpair failed and we were unable to recover it. 00:21:50.327 [2024-04-24 19:52:31.578224] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.327 [2024-04-24 19:52:31.578410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.327 [2024-04-24 19:52:31.578440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.327 [2024-04-24 19:52:31.578457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.327 [2024-04-24 19:52:31.578471] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.327 [2024-04-24 19:52:31.578500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.327 qpair failed and we were unable to recover it. 00:21:50.327 [2024-04-24 19:52:31.588283] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.327 [2024-04-24 19:52:31.588486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.327 [2024-04-24 19:52:31.588511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.327 [2024-04-24 19:52:31.588527] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.327 [2024-04-24 19:52:31.588540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.327 [2024-04-24 19:52:31.588570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.327 qpair failed and we were unable to recover it. 
00:21:50.327 [2024-04-24 19:52:31.598283] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.327 [2024-04-24 19:52:31.598467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.327 [2024-04-24 19:52:31.598493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.327 [2024-04-24 19:52:31.598512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.327 [2024-04-24 19:52:31.598525] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.327 [2024-04-24 19:52:31.598555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.327 qpair failed and we were unable to recover it. 00:21:50.327 [2024-04-24 19:52:31.608335] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.327 [2024-04-24 19:52:31.608498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.327 [2024-04-24 19:52:31.608523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.327 [2024-04-24 19:52:31.608538] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.327 [2024-04-24 19:52:31.608551] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.327 [2024-04-24 19:52:31.608580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.327 qpair failed and we were unable to recover it. 00:21:50.327 [2024-04-24 19:52:31.618328] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.327 [2024-04-24 19:52:31.618536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.327 [2024-04-24 19:52:31.618561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.327 [2024-04-24 19:52:31.618577] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.327 [2024-04-24 19:52:31.618590] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.327 [2024-04-24 19:52:31.618619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.327 qpair failed and we were unable to recover it. 
00:21:50.327 [2024-04-24 19:52:31.628403] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.327 [2024-04-24 19:52:31.628557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.327 [2024-04-24 19:52:31.628582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.327 [2024-04-24 19:52:31.628598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.327 [2024-04-24 19:52:31.628610] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.327 [2024-04-24 19:52:31.628669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.327 qpair failed and we were unable to recover it. 00:21:50.327 [2024-04-24 19:52:31.638420] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.327 [2024-04-24 19:52:31.638618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.327 [2024-04-24 19:52:31.638652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.327 [2024-04-24 19:52:31.638674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.327 [2024-04-24 19:52:31.638688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.327 [2024-04-24 19:52:31.638720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.327 qpair failed and we were unable to recover it. 00:21:50.327 [2024-04-24 19:52:31.648424] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.327 [2024-04-24 19:52:31.648577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.327 [2024-04-24 19:52:31.648603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.327 [2024-04-24 19:52:31.648618] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.327 [2024-04-24 19:52:31.648641] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.327 [2024-04-24 19:52:31.648672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.327 qpair failed and we were unable to recover it. 
00:21:50.327 [2024-04-24 19:52:31.658436] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.327 [2024-04-24 19:52:31.658600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.327 [2024-04-24 19:52:31.658625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.327 [2024-04-24 19:52:31.658648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.327 [2024-04-24 19:52:31.658662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.327 [2024-04-24 19:52:31.658691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.327 qpair failed and we were unable to recover it. 00:21:50.327 [2024-04-24 19:52:31.668499] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.328 [2024-04-24 19:52:31.668662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.328 [2024-04-24 19:52:31.668687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.328 [2024-04-24 19:52:31.668703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.328 [2024-04-24 19:52:31.668716] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.328 [2024-04-24 19:52:31.668744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.328 qpair failed and we were unable to recover it. 00:21:50.328 [2024-04-24 19:52:31.678508] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.328 [2024-04-24 19:52:31.678674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.328 [2024-04-24 19:52:31.678700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.328 [2024-04-24 19:52:31.678715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.328 [2024-04-24 19:52:31.678727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.328 [2024-04-24 19:52:31.678756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.328 qpair failed and we were unable to recover it. 
00:21:50.328 [2024-04-24 19:52:31.688563] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.328 [2024-04-24 19:52:31.688743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.328 [2024-04-24 19:52:31.688769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.328 [2024-04-24 19:52:31.688784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.328 [2024-04-24 19:52:31.688797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.328 [2024-04-24 19:52:31.688825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.328 qpair failed and we were unable to recover it. 00:21:50.328 [2024-04-24 19:52:31.698554] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.328 [2024-04-24 19:52:31.698762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.328 [2024-04-24 19:52:31.698789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.328 [2024-04-24 19:52:31.698804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.328 [2024-04-24 19:52:31.698816] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.328 [2024-04-24 19:52:31.698844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.328 qpair failed and we were unable to recover it. 00:21:50.328 [2024-04-24 19:52:31.708568] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.328 [2024-04-24 19:52:31.708723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.328 [2024-04-24 19:52:31.708748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.328 [2024-04-24 19:52:31.708763] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.328 [2024-04-24 19:52:31.708776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.328 [2024-04-24 19:52:31.708804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.328 qpair failed and we were unable to recover it. 
00:21:50.328 [2024-04-24 19:52:31.718644] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.328 [2024-04-24 19:52:31.718811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.328 [2024-04-24 19:52:31.718838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.328 [2024-04-24 19:52:31.718853] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.328 [2024-04-24 19:52:31.718866] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.328 [2024-04-24 19:52:31.718901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.328 qpair failed and we were unable to recover it. 00:21:50.328 [2024-04-24 19:52:31.728663] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.328 [2024-04-24 19:52:31.728845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.328 [2024-04-24 19:52:31.728877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.328 [2024-04-24 19:52:31.728894] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.328 [2024-04-24 19:52:31.728907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.328 [2024-04-24 19:52:31.728936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.328 qpair failed and we were unable to recover it. 00:21:50.328 [2024-04-24 19:52:31.738682] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.328 [2024-04-24 19:52:31.738887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.328 [2024-04-24 19:52:31.738914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.328 [2024-04-24 19:52:31.738929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.328 [2024-04-24 19:52:31.738942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.328 [2024-04-24 19:52:31.738972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.328 qpair failed and we were unable to recover it. 
00:21:50.328 [2024-04-24 19:52:31.748710] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.328 [2024-04-24 19:52:31.748885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.328 [2024-04-24 19:52:31.748910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.328 [2024-04-24 19:52:31.748924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.328 [2024-04-24 19:52:31.748937] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.328 [2024-04-24 19:52:31.748966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.328 qpair failed and we were unable to recover it. 00:21:50.328 [2024-04-24 19:52:31.758751] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.328 [2024-04-24 19:52:31.758909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.328 [2024-04-24 19:52:31.758934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.328 [2024-04-24 19:52:31.758949] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.328 [2024-04-24 19:52:31.758961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.328 [2024-04-24 19:52:31.758989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.328 qpair failed and we were unable to recover it. 00:21:50.328 [2024-04-24 19:52:31.768812] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.328 [2024-04-24 19:52:31.768992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.328 [2024-04-24 19:52:31.769017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.328 [2024-04-24 19:52:31.769032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.328 [2024-04-24 19:52:31.769044] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.328 [2024-04-24 19:52:31.769079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.328 qpair failed and we were unable to recover it. 
00:21:50.328 [2024-04-24 19:52:31.778801] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.328 [2024-04-24 19:52:31.778954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.328 [2024-04-24 19:52:31.778980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.328 [2024-04-24 19:52:31.778995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.328 [2024-04-24 19:52:31.779008] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.328 [2024-04-24 19:52:31.779037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.328 qpair failed and we were unable to recover it. 00:21:50.328 [2024-04-24 19:52:31.788825] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.328 [2024-04-24 19:52:31.788995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.328 [2024-04-24 19:52:31.789020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.328 [2024-04-24 19:52:31.789035] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.329 [2024-04-24 19:52:31.789047] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.329 [2024-04-24 19:52:31.789076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.329 qpair failed and we were unable to recover it. 00:21:50.329 [2024-04-24 19:52:31.798868] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.329 [2024-04-24 19:52:31.799032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.329 [2024-04-24 19:52:31.799057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.329 [2024-04-24 19:52:31.799072] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.329 [2024-04-24 19:52:31.799086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.329 [2024-04-24 19:52:31.799115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.329 qpair failed and we were unable to recover it. 
00:21:50.329 [2024-04-24 19:52:31.808900] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.329 [2024-04-24 19:52:31.809110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.329 [2024-04-24 19:52:31.809137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.329 [2024-04-24 19:52:31.809152] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.329 [2024-04-24 19:52:31.809165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.329 [2024-04-24 19:52:31.809193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.329 qpair failed and we were unable to recover it. 00:21:50.329 [2024-04-24 19:52:31.818942] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.329 [2024-04-24 19:52:31.819127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.329 [2024-04-24 19:52:31.819159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.329 [2024-04-24 19:52:31.819176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.329 [2024-04-24 19:52:31.819188] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.329 [2024-04-24 19:52:31.819217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.329 qpair failed and we were unable to recover it. 00:21:50.329 [2024-04-24 19:52:31.828957] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.329 [2024-04-24 19:52:31.829116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.329 [2024-04-24 19:52:31.829144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.329 [2024-04-24 19:52:31.829159] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.329 [2024-04-24 19:52:31.829172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.329 [2024-04-24 19:52:31.829200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.329 qpair failed and we were unable to recover it. 
00:21:50.329 [2024-04-24 19:52:31.838972] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.329 [2024-04-24 19:52:31.839127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.329 [2024-04-24 19:52:31.839152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.329 [2024-04-24 19:52:31.839168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.329 [2024-04-24 19:52:31.839180] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.329 [2024-04-24 19:52:31.839209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.329 qpair failed and we were unable to recover it. 00:21:50.589 [2024-04-24 19:52:31.849007] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.589 [2024-04-24 19:52:31.849170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.589 [2024-04-24 19:52:31.849196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.589 [2024-04-24 19:52:31.849211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.589 [2024-04-24 19:52:31.849224] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.589 [2024-04-24 19:52:31.849252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.589 qpair failed and we were unable to recover it. 00:21:50.589 [2024-04-24 19:52:31.859002] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.589 [2024-04-24 19:52:31.859162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.589 [2024-04-24 19:52:31.859187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.589 [2024-04-24 19:52:31.859203] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.589 [2024-04-24 19:52:31.859216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.589 [2024-04-24 19:52:31.859249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.589 qpair failed and we were unable to recover it. 
00:21:50.589 [2024-04-24 19:52:31.869029] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.589 [2024-04-24 19:52:31.869185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.589 [2024-04-24 19:52:31.869211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.589 [2024-04-24 19:52:31.869227] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.589 [2024-04-24 19:52:31.869240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.589 [2024-04-24 19:52:31.869268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.589 qpair failed and we were unable to recover it. 00:21:50.589 [2024-04-24 19:52:31.879071] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.589 [2024-04-24 19:52:31.879222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.589 [2024-04-24 19:52:31.879249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.589 [2024-04-24 19:52:31.879264] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.589 [2024-04-24 19:52:31.879278] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.589 [2024-04-24 19:52:31.879307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.589 qpair failed and we were unable to recover it. 00:21:50.589 [2024-04-24 19:52:31.889133] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.589 [2024-04-24 19:52:31.889294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.589 [2024-04-24 19:52:31.889320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.589 [2024-04-24 19:52:31.889336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.589 [2024-04-24 19:52:31.889349] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.589 [2024-04-24 19:52:31.889377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.589 qpair failed and we were unable to recover it. 
00:21:50.589 [2024-04-24 19:52:31.899123] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.589 [2024-04-24 19:52:31.899287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.589 [2024-04-24 19:52:31.899313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.589 [2024-04-24 19:52:31.899329] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.589 [2024-04-24 19:52:31.899342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.589 [2024-04-24 19:52:31.899370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.589 qpair failed and we were unable to recover it. 00:21:50.589 [2024-04-24 19:52:31.909195] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.589 [2024-04-24 19:52:31.909392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.589 [2024-04-24 19:52:31.909425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.589 [2024-04-24 19:52:31.909445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.589 [2024-04-24 19:52:31.909460] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.589 [2024-04-24 19:52:31.909489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.589 qpair failed and we were unable to recover it. 00:21:50.589 [2024-04-24 19:52:31.919180] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.589 [2024-04-24 19:52:31.919381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.589 [2024-04-24 19:52:31.919409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.589 [2024-04-24 19:52:31.919426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.589 [2024-04-24 19:52:31.919439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.589 [2024-04-24 19:52:31.919468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.589 qpair failed and we were unable to recover it. 
00:21:50.589 [2024-04-24 19:52:31.929371] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.589 [2024-04-24 19:52:31.929553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.589 [2024-04-24 19:52:31.929579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.589 [2024-04-24 19:52:31.929594] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.589 [2024-04-24 19:52:31.929607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.589 [2024-04-24 19:52:31.929647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.589 qpair failed and we were unable to recover it. 00:21:50.589 [2024-04-24 19:52:31.939273] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.589 [2024-04-24 19:52:31.939445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.589 [2024-04-24 19:52:31.939471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.589 [2024-04-24 19:52:31.939486] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.589 [2024-04-24 19:52:31.939498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.589 [2024-04-24 19:52:31.939527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.589 qpair failed and we were unable to recover it. 00:21:50.589 [2024-04-24 19:52:31.949298] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.589 [2024-04-24 19:52:31.949474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.589 [2024-04-24 19:52:31.949499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.589 [2024-04-24 19:52:31.949513] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.589 [2024-04-24 19:52:31.949526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.590 [2024-04-24 19:52:31.949561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.590 qpair failed and we were unable to recover it. 
00:21:50.590 [2024-04-24 19:52:31.959405] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.590 [2024-04-24 19:52:31.959573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.590 [2024-04-24 19:52:31.959599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.590 [2024-04-24 19:52:31.959614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.590 [2024-04-24 19:52:31.959635] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.590 [2024-04-24 19:52:31.959667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.590 qpair failed and we were unable to recover it. 00:21:50.590 [2024-04-24 19:52:31.969331] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.590 [2024-04-24 19:52:31.969489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.590 [2024-04-24 19:52:31.969516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.590 [2024-04-24 19:52:31.969532] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.590 [2024-04-24 19:52:31.969546] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.590 [2024-04-24 19:52:31.969574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.590 qpair failed and we were unable to recover it. 00:21:50.590 [2024-04-24 19:52:31.979370] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.590 [2024-04-24 19:52:31.979526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.590 [2024-04-24 19:52:31.979553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.590 [2024-04-24 19:52:31.979568] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.590 [2024-04-24 19:52:31.979581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.590 [2024-04-24 19:52:31.979610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.590 qpair failed and we were unable to recover it. 
00:21:50.590 [2024-04-24 19:52:31.989404] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.590 [2024-04-24 19:52:31.989562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.590 [2024-04-24 19:52:31.989589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.590 [2024-04-24 19:52:31.989604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.590 [2024-04-24 19:52:31.989617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.590 [2024-04-24 19:52:31.989654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.590 qpair failed and we were unable to recover it. 00:21:50.590 [2024-04-24 19:52:31.999404] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.590 [2024-04-24 19:52:31.999555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.590 [2024-04-24 19:52:31.999586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.590 [2024-04-24 19:52:31.999601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.590 [2024-04-24 19:52:31.999614] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.590 [2024-04-24 19:52:31.999652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.590 qpair failed and we were unable to recover it. 00:21:50.590 [2024-04-24 19:52:32.009469] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.590 [2024-04-24 19:52:32.009624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.590 [2024-04-24 19:52:32.009656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.590 [2024-04-24 19:52:32.009671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.590 [2024-04-24 19:52:32.009685] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.590 [2024-04-24 19:52:32.009714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.590 qpair failed and we were unable to recover it. 
00:21:50.590 [2024-04-24 19:52:32.019491] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.590 [2024-04-24 19:52:32.019655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.590 [2024-04-24 19:52:32.019682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.590 [2024-04-24 19:52:32.019698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.590 [2024-04-24 19:52:32.019710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.590 [2024-04-24 19:52:32.019739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.590 qpair failed and we were unable to recover it. 00:21:50.590 [2024-04-24 19:52:32.029486] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.590 [2024-04-24 19:52:32.029647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.590 [2024-04-24 19:52:32.029674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.590 [2024-04-24 19:52:32.029690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.590 [2024-04-24 19:52:32.029702] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.590 [2024-04-24 19:52:32.029731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.590 qpair failed and we were unable to recover it. 00:21:50.590 [2024-04-24 19:52:32.039618] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.590 [2024-04-24 19:52:32.039796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.590 [2024-04-24 19:52:32.039822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.590 [2024-04-24 19:52:32.039837] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.590 [2024-04-24 19:52:32.039856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.590 [2024-04-24 19:52:32.039885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.590 qpair failed and we were unable to recover it. 
00:21:50.590 [2024-04-24 19:52:32.049573] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.590 [2024-04-24 19:52:32.049735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.590 [2024-04-24 19:52:32.049760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.590 [2024-04-24 19:52:32.049775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.590 [2024-04-24 19:52:32.049787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.590 [2024-04-24 19:52:32.049817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.590 qpair failed and we were unable to recover it. 00:21:50.590 [2024-04-24 19:52:32.059582] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.590 [2024-04-24 19:52:32.059744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.590 [2024-04-24 19:52:32.059771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.590 [2024-04-24 19:52:32.059786] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.590 [2024-04-24 19:52:32.059799] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.590 [2024-04-24 19:52:32.059827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.590 qpair failed and we were unable to recover it. 00:21:50.590 [2024-04-24 19:52:32.069599] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:50.590 [2024-04-24 19:52:32.069767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:50.590 [2024-04-24 19:52:32.069793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:50.590 [2024-04-24 19:52:32.069808] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:50.590 [2024-04-24 19:52:32.069821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30 00:21:50.590 [2024-04-24 19:52:32.069849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:50.590 qpair failed and we were unable to recover it. 
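The block above is the host-side signature of the tc2 disconnect test: each attempt to (re)establish I/O qpair 3 is rejected by the target with "Unknown controller ID 0x1", the Fabrics CONNECT completes with sct 1, sc 130 (status code type 1 is command-specific; 0x82 is the Fabrics "Connect Invalid Parameters" status), and the TCP transport surfaces -6 (-ENXIO, "No such device or address"). The same eight-entry sequence repeated roughly every 10 ms until the host gave up; the final iterations and the resulting controller reset follow below. As a rough illustration only (not part of this test), a single Fabrics CONNECT against the same listener can be issued by hand with the kernel initiator; this sketch assumes nvme-cli is installed and 10.0.0.2:4420 is still reachable:

    # Hedged sketch: one manual fabrics CONNECT against this run's listener.
    # Assumes nvme-cli and a reachable target at 10.0.0.2:4420.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list-subsys                                # verify the attachment
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # clean up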
00:21:50.590 [2024-04-24 19:52:32.079692] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.590 [2024-04-24 19:52:32.079853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.590 [2024-04-24 19:52:32.079880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.590 [2024-04-24 19:52:32.079895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.590 [2024-04-24 19:52:32.079908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.590 [2024-04-24 19:52:32.079936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.590 qpair failed and we were unable to recover it.
00:21:50.590 [2024-04-24 19:52:32.089681] ctrlr.c: 718:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:50.590 [2024-04-24 19:52:32.089854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:50.591 [2024-04-24 19:52:32.089880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:50.591 [2024-04-24 19:52:32.089895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:50.591 [2024-04-24 19:52:32.089908] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16cff30
00:21:50.591 [2024-04-24 19:52:32.089944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:50.591 qpair failed and we were unable to recover it.
00:21:50.591 [2024-04-24 19:52:32.090085] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:21:50.591 A controller has encountered a failure and is being reset.
00:21:50.850 Controller properly reset.
00:21:50.850 Initializing NVMe Controllers
00:21:50.850 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:50.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:50.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:21:50.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:21:50.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:21:50.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:21:50.850 Initialization complete. Launching workers.
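With qpair 3 unrecoverable, the next Keep Alive submission fails as well, so the host library declares the controller failed, resets it, reattaches over the same listener, and re-associates one I/O qpair per lcore (0-3); the per-core worker threads then restart, as the next lines show. For context, a TCP target of the shape this test reconnects to can be assembled with SPDK's scripts/rpc.py. The RPC names below are SPDK's real commands; the Malloc bdev and its sizes are illustrative stand-ins, not values taken from this run:

    # Hedged sketch: a minimal TCP target resembling cnode1 in this run.
    rpc.py nvmf_create_transport -t TCP
    rpc.py bdev_malloc_create -b Malloc0 64 512          # 64 MiB, 512 B blocks (illustrative)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420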
00:21:50.850 Starting thread on core 1 00:21:50.850 Starting thread on core 2 00:21:50.850 Starting thread on core 3 00:21:50.850 Starting thread on core 0 00:21:50.850 19:52:32 -- host/target_disconnect.sh@59 -- # sync 00:21:50.850 00:21:50.850 real 0m10.829s 00:21:50.850 user 0m17.382s 00:21:50.850 sys 0m5.489s 00:21:50.850 19:52:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:50.850 19:52:32 -- common/autotest_common.sh@10 -- # set +x 00:21:50.850 ************************************ 00:21:50.850 END TEST nvmf_target_disconnect_tc2 00:21:50.850 ************************************ 00:21:50.850 19:52:32 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:21:50.850 19:52:32 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:50.850 19:52:32 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:21:50.850 19:52:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:50.850 19:52:32 -- nvmf/common.sh@117 -- # sync 00:21:50.850 19:52:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.850 19:52:32 -- nvmf/common.sh@120 -- # set +e 00:21:50.850 19:52:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.850 19:52:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.850 rmmod nvme_tcp 00:21:50.850 rmmod nvme_fabrics 00:21:50.850 rmmod nvme_keyring 00:21:50.850 19:52:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.850 19:52:32 -- nvmf/common.sh@124 -- # set -e 00:21:50.850 19:52:32 -- nvmf/common.sh@125 -- # return 0 00:21:50.850 19:52:32 -- nvmf/common.sh@478 -- # '[' -n 1780101 ']' 00:21:50.850 19:52:32 -- nvmf/common.sh@479 -- # killprocess 1780101 00:21:50.850 19:52:32 -- common/autotest_common.sh@936 -- # '[' -z 1780101 ']' 00:21:50.850 19:52:32 -- common/autotest_common.sh@940 -- # kill -0 1780101 00:21:50.850 19:52:32 -- common/autotest_common.sh@941 -- # uname 00:21:50.850 19:52:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:50.850 19:52:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1780101 00:21:50.850 19:52:32 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:21:50.850 19:52:32 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:21:50.850 19:52:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1780101' 00:21:50.850 killing process with pid 1780101 00:21:50.850 19:52:32 -- common/autotest_common.sh@955 -- # kill 1780101 00:21:50.850 19:52:32 -- common/autotest_common.sh@960 -- # wait 1780101 00:21:51.109 19:52:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:51.109 19:52:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:51.109 19:52:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:51.109 19:52:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:51.109 19:52:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:51.109 19:52:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.109 19:52:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.109 19:52:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.653 19:52:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:53.653 00:21:53.653 real 0m15.737s 00:21:53.653 user 0m43.519s 00:21:53.653 sys 0m7.537s 00:21:53.653 19:52:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:53.653 19:52:34 -- common/autotest_common.sh@10 -- # set +x 00:21:53.653 ************************************ 00:21:53.653 END TEST nvmf_target_disconnect 00:21:53.653 
************************************ 00:21:53.653 19:52:34 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:21:53.653 19:52:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:53.653 19:52:34 -- common/autotest_common.sh@10 -- # set +x 00:21:53.653 19:52:34 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:21:53.653 00:21:53.653 real 15m37.513s 00:21:53.653 user 36m12.552s 00:21:53.653 sys 4m15.931s 00:21:53.653 19:52:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:53.653 19:52:34 -- common/autotest_common.sh@10 -- # set +x 00:21:53.653 ************************************ 00:21:53.653 END TEST nvmf_tcp 00:21:53.653 ************************************ 00:21:53.653 19:52:34 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:21:53.653 19:52:34 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:53.653 19:52:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:53.653 19:52:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:53.653 19:52:34 -- common/autotest_common.sh@10 -- # set +x 00:21:53.653 ************************************ 00:21:53.653 START TEST spdkcli_nvmf_tcp 00:21:53.653 ************************************ 00:21:53.653 19:52:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:53.653 * Looking for test storage... 00:21:53.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:21:53.653 19:52:34 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:21:53.653 19:52:34 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:21:53.653 19:52:34 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:21:53.654 19:52:34 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.654 19:52:34 -- nvmf/common.sh@7 -- # uname -s 00:21:53.654 19:52:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.654 19:52:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.654 19:52:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.654 19:52:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.654 19:52:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.654 19:52:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.654 19:52:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.654 19:52:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.654 19:52:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.654 19:52:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.654 19:52:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.654 19:52:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.654 19:52:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.654 19:52:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.654 19:52:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.654 19:52:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.654 19:52:34 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.654 19:52:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.654 19:52:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.654 19:52:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.654 19:52:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.654 19:52:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.654 19:52:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.654 19:52:34 -- paths/export.sh@5 -- # export PATH 00:21:53.654 19:52:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.654 19:52:34 -- nvmf/common.sh@47 -- # : 0 00:21:53.654 19:52:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:53.654 19:52:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:53.654 19:52:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.654 19:52:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.654 19:52:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.654 19:52:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:53.654 19:52:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:53.654 19:52:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:53.654 19:52:34 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:53.654 19:52:34 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:53.654 19:52:34 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:53.654 19:52:34 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:21:53.654 19:52:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:53.654 19:52:34 -- common/autotest_common.sh@10 -- # set +x 00:21:53.654 19:52:34 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:53.654 19:52:34 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1781305 00:21:53.654 19:52:34 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:53.654 19:52:34 -- spdkcli/common.sh@34 -- # 
waitforlisten 1781305 00:21:53.654 19:52:34 -- common/autotest_common.sh@817 -- # '[' -z 1781305 ']' 00:21:53.654 19:52:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.654 19:52:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:53.654 19:52:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.654 19:52:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:53.654 19:52:34 -- common/autotest_common.sh@10 -- # set +x 00:21:53.654 [2024-04-24 19:52:34.873709] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:21:53.654 [2024-04-24 19:52:34.873806] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1781305 ] 00:21:53.654 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.654 [2024-04-24 19:52:34.934145] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:53.654 [2024-04-24 19:52:35.045650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.654 [2024-04-24 19:52:35.045660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.654 19:52:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:53.654 19:52:35 -- common/autotest_common.sh@850 -- # return 0 00:21:53.654 19:52:35 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:53.654 19:52:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:53.654 19:52:35 -- common/autotest_common.sh@10 -- # set +x 00:21:53.912 19:52:35 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:53.912 19:52:35 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:21:53.912 19:52:35 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:21:53.912 19:52:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:53.912 19:52:35 -- common/autotest_common.sh@10 -- # set +x 00:21:53.912 19:52:35 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:53.912 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:53.912 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:21:53.912 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:21:53.912 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:21:53.912 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:21:53.912 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:21:53.912 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:53.912 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 
True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:53.912 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:21:53.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:21:53.912 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:21:53.912 ' 00:21:54.170 [2024-04-24 19:52:35.575965] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:56.699 [2024-04-24 19:52:37.733221] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.632 [2024-04-24 19:52:38.957479] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:22:00.160 [2024-04-24 19:52:41.236623] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:22:02.058 [2024-04-24 19:52:43.210891] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:22:03.485 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:22:03.485 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:22:03.485 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:22:03.485 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:22:03.485 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:22:03.485 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:22:03.485 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:22:03.485 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:03.485 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:03.485 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:22:03.485 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:22:03.485 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:22:03.485 19:52:44 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:22:03.485 19:52:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:03.485 19:52:44 -- common/autotest_common.sh@10 -- # set +x 00:22:03.485 19:52:44 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:22:03.485 19:52:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:03.485 19:52:44 -- common/autotest_common.sh@10 -- # set +x 00:22:03.485 19:52:44 -- spdkcli/nvmf.sh@69 -- # check_match 00:22:03.485 19:52:44 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:22:04.050 19:52:45 
-- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:22:04.050 19:52:45 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:22:04.050 19:52:45 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:22:04.050 19:52:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:04.050 19:52:45 -- common/autotest_common.sh@10 -- # set +x 00:22:04.050 19:52:45 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:22:04.050 19:52:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:04.050 19:52:45 -- common/autotest_common.sh@10 -- # set +x 00:22:04.050 19:52:45 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:22:04.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:22:04.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:04.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:22:04.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:22:04.050 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:22:04.050 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:22:04.051 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:04.051 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:22:04.051 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:22:04.051 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:22:04.051 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:22:04.051 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:22:04.051 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:22:04.051 ' 00:22:09.315 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:22:09.315 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:22:09.315 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:09.315 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:22:09.315 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:22:09.315 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:22:09.315 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:22:09.315 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:09.315 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:22:09.316 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:22:09.316 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:22:09.316 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:22:09.316 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:22:09.316 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:22:09.316 19:52:50 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:22:09.316 19:52:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:09.316 19:52:50 -- common/autotest_common.sh@10 -- # set +x 00:22:09.316 19:52:50 -- spdkcli/nvmf.sh@90 -- # killprocess 1781305 00:22:09.316 19:52:50 -- common/autotest_common.sh@936 -- # '[' -z 1781305 ']' 00:22:09.316 19:52:50 -- common/autotest_common.sh@940 -- # kill -0 1781305 00:22:09.316 19:52:50 -- common/autotest_common.sh@941 -- # uname 00:22:09.316 19:52:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:09.316 19:52:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1781305 00:22:09.316 19:52:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:09.316 19:52:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:09.316 19:52:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1781305' 00:22:09.316 killing process with pid 1781305 00:22:09.316 19:52:50 -- common/autotest_common.sh@955 -- # kill 1781305 00:22:09.316 [2024-04-24 19:52:50.655868] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:09.316 19:52:50 -- common/autotest_common.sh@960 -- # wait 1781305 00:22:09.574 19:52:50 -- spdkcli/nvmf.sh@1 -- # cleanup 00:22:09.574 19:52:50 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:22:09.574 19:52:50 -- spdkcli/common.sh@13 -- # '[' -n 1781305 ']' 00:22:09.574 19:52:50 -- spdkcli/common.sh@14 -- # killprocess 1781305 00:22:09.574 19:52:50 -- common/autotest_common.sh@936 -- # '[' -z 1781305 ']' 00:22:09.574 19:52:50 -- common/autotest_common.sh@940 -- # kill -0 1781305 00:22:09.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1781305) - No such process 00:22:09.574 19:52:50 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1781305 is not found' 00:22:09.574 Process with pid 1781305 is not found 00:22:09.574 19:52:50 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:09.574 19:52:50 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:09.574 19:52:50 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:09.574 00:22:09.574 real 0m16.171s 00:22:09.574 user 0m34.146s 00:22:09.574 sys 0m0.845s 00:22:09.574 19:52:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:09.574 19:52:50 -- common/autotest_common.sh@10 -- # set +x 00:22:09.574 ************************************ 00:22:09.574 END TEST spdkcli_nvmf_tcp 00:22:09.574 ************************************ 00:22:09.574 19:52:50 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:09.574 19:52:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:09.574 19:52:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:09.574 19:52:50 -- common/autotest_common.sh@10 -- # set +x 00:22:09.574 ************************************ 00:22:09.574 START TEST 
nvmf_identify_passthru 00:22:09.574 ************************************ 00:22:09.574 19:52:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:09.834 * Looking for test storage... 00:22:09.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:09.834 19:52:51 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:09.834 19:52:51 -- nvmf/common.sh@7 -- # uname -s 00:22:09.834 19:52:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.834 19:52:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.834 19:52:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.834 19:52:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.834 19:52:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.834 19:52:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.834 19:52:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.834 19:52:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.834 19:52:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.834 19:52:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.834 19:52:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.834 19:52:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.834 19:52:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.834 19:52:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.834 19:52:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:09.834 19:52:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.834 19:52:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.834 19:52:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.834 19:52:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.834 19:52:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.834 19:52:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.834 19:52:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.834 19:52:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.834 19:52:51 -- paths/export.sh@5 -- # export PATH 00:22:09.834 19:52:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.834 19:52:51 -- nvmf/common.sh@47 -- # : 0 00:22:09.834 19:52:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:09.834 19:52:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:09.834 19:52:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.834 19:52:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.834 19:52:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.834 19:52:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:09.834 19:52:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:09.834 19:52:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:09.834 19:52:51 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.834 19:52:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.834 19:52:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.834 19:52:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.834 19:52:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.834 19:52:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.834 19:52:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.834 19:52:51 -- paths/export.sh@5 -- # export PATH 00:22:09.834 19:52:51 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.834 19:52:51 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:22:09.834 19:52:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:09.834 19:52:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.834 19:52:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:09.834 19:52:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:09.834 19:52:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:09.834 19:52:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.834 19:52:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:09.834 19:52:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.834 19:52:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:09.834 19:52:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:09.835 19:52:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:09.835 19:52:51 -- common/autotest_common.sh@10 -- # set +x 00:22:11.740 19:52:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:11.740 19:52:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:11.740 19:52:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:11.740 19:52:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:11.740 19:52:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:11.740 19:52:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:11.740 19:52:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:11.740 19:52:52 -- nvmf/common.sh@295 -- # net_devs=() 00:22:11.740 19:52:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:11.740 19:52:52 -- nvmf/common.sh@296 -- # e810=() 00:22:11.740 19:52:52 -- nvmf/common.sh@296 -- # local -ga e810 00:22:11.740 19:52:52 -- nvmf/common.sh@297 -- # x722=() 00:22:11.740 19:52:52 -- nvmf/common.sh@297 -- # local -ga x722 00:22:11.740 19:52:52 -- nvmf/common.sh@298 -- # mlx=() 00:22:11.740 19:52:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:11.740 19:52:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.740 19:52:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.740 19:52:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.740 19:52:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.740 19:52:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.740 19:52:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.740 19:52:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.740 19:52:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.740 19:52:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.740 19:52:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.740 19:52:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.740 19:52:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:11.740 19:52:52 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:11.740 19:52:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:11.740 19:52:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:11.740 19:52:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:11.741 19:52:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:11.741 19:52:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.741 19:52:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:11.741 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:11.741 19:52:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.741 19:52:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.741 19:52:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.741 19:52:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.741 19:52:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.741 19:52:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:11.741 19:52:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:11.741 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:11.741 19:52:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:11.741 19:52:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:11.741 19:52:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.741 19:52:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.741 19:52:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:11.741 19:52:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:11.741 19:52:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:11.741 19:52:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:11.741 19:52:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.741 19:52:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.741 19:52:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:11.741 19:52:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.741 19:52:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:11.741 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:11.741 19:52:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.741 19:52:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:11.741 19:52:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.741 19:52:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:11.741 19:52:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.741 19:52:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:11.741 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:11.741 19:52:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.741 19:52:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:11.741 19:52:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:11.741 19:52:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:11.741 19:52:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:11.741 19:52:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:11.741 19:52:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.741 19:52:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.741 19:52:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.741 19:52:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:11.741 19:52:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.741 19:52:52 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.741 19:52:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:11.741 19:52:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.741 19:52:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.741 19:52:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:11.741 19:52:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:11.741 19:52:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.741 19:52:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.741 19:52:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.741 19:52:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.741 19:52:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:11.741 19:52:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.741 19:52:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.741 19:52:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.741 19:52:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:11.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:22:11.741 00:22:11.741 --- 10.0.0.2 ping statistics --- 00:22:11.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.741 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:22:11.741 19:52:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:22:11.741 00:22:11.741 --- 10.0.0.1 ping statistics --- 00:22:11.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.741 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:22:11.741 19:52:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.741 19:52:53 -- nvmf/common.sh@411 -- # return 0 00:22:11.741 19:52:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:11.741 19:52:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.741 19:52:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:11.741 19:52:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:11.741 19:52:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.741 19:52:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:11.741 19:52:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:11.741 19:52:53 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:22:11.741 19:52:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:11.741 19:52:53 -- common/autotest_common.sh@10 -- # set +x 00:22:11.741 19:52:53 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:22:11.741 19:52:53 -- common/autotest_common.sh@1510 -- # bdfs=() 00:22:11.741 19:52:53 -- common/autotest_common.sh@1510 -- # local bdfs 00:22:11.741 19:52:53 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:22:11.741 19:52:53 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:22:11.741 19:52:53 -- common/autotest_common.sh@1499 -- # bdfs=() 00:22:11.741 19:52:53 -- common/autotest_common.sh@1499 -- # local bdfs 00:22:11.741 19:52:53 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:22:11.741 19:52:53 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:11.741 19:52:53 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:22:11.741 19:52:53 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:22:11.741 19:52:53 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:88:00.0 00:22:11.741 19:52:53 -- common/autotest_common.sh@1513 -- # echo 0000:88:00.0 00:22:11.741 19:52:53 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:22:11.741 19:52:53 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:22:11.741 19:52:53 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:22:11.741 19:52:53 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:22:11.741 19:52:53 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:22:11.741 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.930 19:52:57 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:22:15.930 19:52:57 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:22:15.930 19:52:57 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:22:15.930 19:52:57 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:22:15.930 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.117 19:53:01 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:22:20.117 19:53:01 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:22:20.117 19:53:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:20.117 19:53:01 -- common/autotest_common.sh@10 -- # set +x 00:22:20.375 19:53:01 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:22:20.375 19:53:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:20.375 19:53:01 -- common/autotest_common.sh@10 -- # set +x 00:22:20.375 19:53:01 -- target/identify_passthru.sh@31 -- # nvmfpid=1785929 00:22:20.375 19:53:01 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:20.375 19:53:01 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:20.375 19:53:01 -- target/identify_passthru.sh@35 -- # waitforlisten 1785929 00:22:20.375 19:53:01 -- common/autotest_common.sh@817 -- # '[' -z 1785929 ']' 00:22:20.375 19:53:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.375 19:53:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:20.375 19:53:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.375 19:53:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:20.375 19:53:01 -- common/autotest_common.sh@10 -- # set +x 00:22:20.375 [2024-04-24 19:53:01.696137] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
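The local half of the passthru comparison was just captured above: identify runs against the PCIe controller at the bdf found via gen_nvme.sh, and the serial and model numbers are scraped with grep/awk. Rerun by hand (the same commands the test traces), it looks like:

# serial of the local PCIe controller, scraped exactly as the test does
./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 \
    | grep 'Serial Number:' | awk '{print $3}'
# -> PHLJ916004901P0FGN, stored as nvme_serial_number and later compared
#    against the value read back over the fabrics connection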
00:22:20.375 [2024-04-24 19:53:01.696232] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.375 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.375 [2024-04-24 19:53:01.762212] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:20.376 [2024-04-24 19:53:01.877038] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.376 [2024-04-24 19:53:01.877091] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.376 [2024-04-24 19:53:01.877105] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.376 [2024-04-24 19:53:01.877124] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.376 [2024-04-24 19:53:01.877136] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.376 [2024-04-24 19:53:01.877205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.376 [2024-04-24 19:53:01.877265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.376 [2024-04-24 19:53:01.877287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:20.376 [2024-04-24 19:53:01.877290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.635 19:53:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:20.635 19:53:01 -- common/autotest_common.sh@850 -- # return 0 00:22:20.635 19:53:01 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:22:20.635 19:53:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.635 19:53:01 -- common/autotest_common.sh@10 -- # set +x 00:22:20.635 INFO: Log level set to 20 00:22:20.635 INFO: Requests: 00:22:20.635 { 00:22:20.635 "jsonrpc": "2.0", 00:22:20.635 "method": "nvmf_set_config", 00:22:20.635 "id": 1, 00:22:20.635 "params": { 00:22:20.635 "admin_cmd_passthru": { 00:22:20.635 "identify_ctrlr": true 00:22:20.635 } 00:22:20.635 } 00:22:20.635 } 00:22:20.635 00:22:20.635 INFO: response: 00:22:20.635 { 00:22:20.635 "jsonrpc": "2.0", 00:22:20.635 "id": 1, 00:22:20.635 "result": true 00:22:20.635 } 00:22:20.635 00:22:20.635 19:53:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.635 19:53:01 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:22:20.635 19:53:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.635 19:53:01 -- common/autotest_common.sh@10 -- # set +x 00:22:20.635 INFO: Setting log level to 20 00:22:20.635 INFO: Setting log level to 20 00:22:20.635 INFO: Log level set to 20 00:22:20.635 INFO: Log level set to 20 00:22:20.635 INFO: Requests: 00:22:20.635 { 00:22:20.635 "jsonrpc": "2.0", 00:22:20.635 "method": "framework_start_init", 00:22:20.635 "id": 1 00:22:20.635 } 00:22:20.635 00:22:20.635 INFO: Requests: 00:22:20.635 { 00:22:20.635 "jsonrpc": "2.0", 00:22:20.635 "method": "framework_start_init", 00:22:20.635 "id": 1 00:22:20.635 } 00:22:20.635 00:22:20.635 [2024-04-24 19:53:02.023995] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:22:20.635 INFO: response: 00:22:20.635 { 00:22:20.635 "jsonrpc": "2.0", 00:22:20.635 "id": 1, 00:22:20.635 "result": true 00:22:20.635 } 00:22:20.635 00:22:20.635 INFO: response: 00:22:20.635 { 00:22:20.635 
"jsonrpc": "2.0", 00:22:20.635 "id": 1, 00:22:20.635 "result": true 00:22:20.635 } 00:22:20.635 00:22:20.635 19:53:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.635 19:53:02 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:20.635 19:53:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.635 19:53:02 -- common/autotest_common.sh@10 -- # set +x 00:22:20.635 INFO: Setting log level to 40 00:22:20.635 INFO: Setting log level to 40 00:22:20.635 INFO: Setting log level to 40 00:22:20.635 [2024-04-24 19:53:02.034140] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.635 19:53:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.635 19:53:02 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:22:20.635 19:53:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:20.635 19:53:02 -- common/autotest_common.sh@10 -- # set +x 00:22:20.635 19:53:02 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:22:20.635 19:53:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.635 19:53:02 -- common/autotest_common.sh@10 -- # set +x 00:22:23.916 Nvme0n1 00:22:23.916 19:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.916 19:53:04 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:22:23.916 19:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.916 19:53:04 -- common/autotest_common.sh@10 -- # set +x 00:22:23.916 19:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.917 19:53:04 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:23.917 19:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.917 19:53:04 -- common/autotest_common.sh@10 -- # set +x 00:22:23.917 19:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.917 19:53:04 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:23.917 19:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.917 19:53:04 -- common/autotest_common.sh@10 -- # set +x 00:22:23.917 [2024-04-24 19:53:04.928689] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.917 19:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.917 19:53:04 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:22:23.917 19:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.917 19:53:04 -- common/autotest_common.sh@10 -- # set +x 00:22:23.917 [2024-04-24 19:53:04.936428] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:23.917 [ 00:22:23.917 { 00:22:23.917 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:23.917 "subtype": "Discovery", 00:22:23.917 "listen_addresses": [], 00:22:23.917 "allow_any_host": true, 00:22:23.917 "hosts": [] 00:22:23.917 }, 00:22:23.917 { 00:22:23.917 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.917 "subtype": "NVMe", 00:22:23.917 "listen_addresses": [ 00:22:23.917 { 00:22:23.917 "transport": "TCP", 00:22:23.917 "trtype": "TCP", 00:22:23.917 "adrfam": "IPv4", 00:22:23.917 "traddr": "10.0.0.2", 00:22:23.917 "trsvcid": "4420" 00:22:23.917 } 00:22:23.917 ], 
00:22:23.917 "allow_any_host": true, 00:22:23.917 "hosts": [], 00:22:23.917 "serial_number": "SPDK00000000000001", 00:22:23.917 "model_number": "SPDK bdev Controller", 00:22:23.917 "max_namespaces": 1, 00:22:23.917 "min_cntlid": 1, 00:22:23.917 "max_cntlid": 65519, 00:22:23.917 "namespaces": [ 00:22:23.917 { 00:22:23.917 "nsid": 1, 00:22:23.917 "bdev_name": "Nvme0n1", 00:22:23.917 "name": "Nvme0n1", 00:22:23.917 "nguid": "0C8D887E24B648C0BDA7D680CB2CB233", 00:22:23.917 "uuid": "0c8d887e-24b6-48c0-bda7-d680cb2cb233" 00:22:23.917 } 00:22:23.917 ] 00:22:23.917 } 00:22:23.917 ] 00:22:23.917 19:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.917 19:53:04 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:23.917 19:53:04 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:22:23.917 19:53:04 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:22:23.917 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.917 19:53:05 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:22:23.917 19:53:05 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:23.917 19:53:05 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:22:23.917 19:53:05 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:22:23.917 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.917 19:53:05 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:22:23.917 19:53:05 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:22:23.917 19:53:05 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:22:23.917 19:53:05 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.917 19:53:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.917 19:53:05 -- common/autotest_common.sh@10 -- # set +x 00:22:23.917 19:53:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.917 19:53:05 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:22:23.917 19:53:05 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:22:23.917 19:53:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:23.917 19:53:05 -- nvmf/common.sh@117 -- # sync 00:22:23.917 19:53:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.917 19:53:05 -- nvmf/common.sh@120 -- # set +e 00:22:23.917 19:53:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.917 19:53:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.917 rmmod nvme_tcp 00:22:23.917 rmmod nvme_fabrics 00:22:23.917 rmmod nvme_keyring 00:22:23.917 19:53:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.917 19:53:05 -- nvmf/common.sh@124 -- # set -e 00:22:23.917 19:53:05 -- nvmf/common.sh@125 -- # return 0 00:22:23.917 19:53:05 -- nvmf/common.sh@478 -- # '[' -n 1785929 ']' 00:22:23.917 19:53:05 -- nvmf/common.sh@479 -- # killprocess 1785929 00:22:23.917 19:53:05 -- common/autotest_common.sh@936 -- # '[' -z 1785929 ']' 00:22:23.917 19:53:05 -- common/autotest_common.sh@940 -- # kill -0 1785929 00:22:23.917 19:53:05 -- common/autotest_common.sh@941 -- # uname 00:22:23.917 19:53:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:23.917 
19:53:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1785929 00:22:24.175 19:53:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:24.175 19:53:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:24.175 19:53:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1785929' 00:22:24.175 killing process with pid 1785929 00:22:24.175 19:53:05 -- common/autotest_common.sh@955 -- # kill 1785929 00:22:24.175 [2024-04-24 19:53:05.434801] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:24.175 19:53:05 -- common/autotest_common.sh@960 -- # wait 1785929 00:22:25.548 19:53:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:25.548 19:53:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:25.548 19:53:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:25.548 19:53:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:25.548 19:53:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:25.548 19:53:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.548 19:53:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:25.548 19:53:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.077 19:53:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:28.077 00:22:28.077 real 0m18.010s 00:22:28.077 user 0m26.847s 00:22:28.077 sys 0m2.285s 00:22:28.077 19:53:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:28.077 19:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:28.077 ************************************ 00:22:28.077 END TEST nvmf_identify_passthru 00:22:28.077 ************************************ 00:22:28.077 19:53:09 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:22:28.077 19:53:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:28.077 19:53:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:28.077 19:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:28.077 ************************************ 00:22:28.077 START TEST nvmf_dif 00:22:28.077 ************************************ 00:22:28.077 19:53:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:22:28.077 * Looking for test storage... 
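Before nvmf_dif starts, nvmftestfini has torn the previous target down in a fixed order: kill the app, unload the kernel initiator modules with a bounded retry (nvme-tcp can stay referenced briefly after the last disconnect), then flush the leftover interface and namespace. The retry idiom from the trace, in isolation; the loop body here is a sketch, since the trace only shows the successful first pass:

set +e
for i in {1..20}; do
    # succeeds once the last nvme-tcp reference is dropped
    modprobe -v -r nvme-tcp && break
done
modprobe -v -r nvme-fabrics
set -e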
00:22:28.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:28.077 19:53:09 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.077 19:53:09 -- nvmf/common.sh@7 -- # uname -s 00:22:28.077 19:53:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.077 19:53:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.077 19:53:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.077 19:53:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.077 19:53:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.077 19:53:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.077 19:53:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.077 19:53:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.077 19:53:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.077 19:53:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.077 19:53:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.077 19:53:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.077 19:53:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.077 19:53:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.077 19:53:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.077 19:53:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.077 19:53:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.077 19:53:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.077 19:53:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.077 19:53:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.077 19:53:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.077 19:53:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.077 19:53:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.077 19:53:09 -- paths/export.sh@5 -- # export PATH 00:22:28.077 19:53:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.077 19:53:09 -- nvmf/common.sh@47 -- # : 0 00:22:28.077 19:53:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.077 19:53:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.077 19:53:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.077 19:53:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.077 19:53:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.077 19:53:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.077 19:53:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.077 19:53:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.077 19:53:09 -- target/dif.sh@15 -- # NULL_META=16 00:22:28.077 19:53:09 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:28.077 19:53:09 -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:28.077 19:53:09 -- target/dif.sh@15 -- # NULL_DIF=1 00:22:28.077 19:53:09 -- target/dif.sh@135 -- # nvmftestinit 00:22:28.077 19:53:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:28.077 19:53:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.077 19:53:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:28.077 19:53:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:28.077 19:53:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:28.077 19:53:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.077 19:53:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:28.077 19:53:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.077 19:53:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:28.077 19:53:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:28.077 19:53:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:28.077 19:53:09 -- common/autotest_common.sh@10 -- # set +x 00:22:29.977 19:53:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:29.977 19:53:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:29.977 19:53:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:29.977 19:53:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:29.977 19:53:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:29.977 19:53:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:29.977 19:53:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:29.977 19:53:11 -- nvmf/common.sh@295 -- # net_devs=() 00:22:29.977 19:53:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:29.977 19:53:11 -- nvmf/common.sh@296 -- # e810=() 00:22:29.977 19:53:11 -- nvmf/common.sh@296 -- # local -ga e810 00:22:29.977 19:53:11 -- nvmf/common.sh@297 -- # x722=() 00:22:29.977 19:53:11 -- nvmf/common.sh@297 -- # local -ga x722 00:22:29.977 19:53:11 -- nvmf/common.sh@298 -- # mlx=() 00:22:29.977 19:53:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:29.977 19:53:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.977 19:53:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.977 19:53:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.977 19:53:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:22:29.977 19:53:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.977 19:53:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.977 19:53:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.977 19:53:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.977 19:53:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.977 19:53:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.977 19:53:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.977 19:53:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:29.977 19:53:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:29.978 19:53:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:29.978 19:53:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.978 19:53:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:29.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:29.978 19:53:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.978 19:53:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:29.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:29.978 19:53:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:29.978 19:53:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.978 19:53:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.978 19:53:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:29.978 19:53:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.978 19:53:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:29.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:29.978 19:53:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.978 19:53:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.978 19:53:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.978 19:53:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:29.978 19:53:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.978 19:53:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:29.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:29.978 19:53:11 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:29.978 19:53:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:29.978 19:53:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:29.978 19:53:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:29.978 19:53:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:29.978 19:53:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.978 19:53:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.978 19:53:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.978 19:53:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:29.978 19:53:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.978 19:53:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.978 19:53:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:29.978 19:53:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.978 19:53:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.978 19:53:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:29.978 19:53:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:29.978 19:53:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.978 19:53:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.978 19:53:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.978 19:53:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.978 19:53:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:29.978 19:53:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.978 19:53:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.978 19:53:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.978 19:53:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:29.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:22:29.978 00:22:29.978 --- 10.0.0.2 ping statistics --- 00:22:29.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.978 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:22:29.978 19:53:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:29.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:22:29.978 00:22:29.978 --- 10.0.0.1 ping statistics --- 00:22:29.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.978 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:22:29.978 19:53:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.978 19:53:11 -- nvmf/common.sh@411 -- # return 0 00:22:29.978 19:53:11 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:22:29.978 19:53:11 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:30.927 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:22:30.927 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:22:30.927 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:22:30.927 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:22:30.927 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:22:30.927 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:22:30.927 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:22:30.927 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:22:30.927 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:22:30.927 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:22:30.927 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:22:30.927 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:22:30.927 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:22:30.927 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:22:30.927 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:22:30.927 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:22:30.927 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:22:31.186 19:53:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.186 19:53:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:31.186 19:53:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:31.186 19:53:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.186 19:53:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:31.186 19:53:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:31.186 19:53:12 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:31.186 19:53:12 -- target/dif.sh@137 -- # nvmfappstart 00:22:31.186 19:53:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:31.186 19:53:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:31.186 19:53:12 -- common/autotest_common.sh@10 -- # set +x 00:22:31.186 19:53:12 -- nvmf/common.sh@470 -- # nvmfpid=1789096 00:22:31.186 19:53:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:31.186 19:53:12 -- nvmf/common.sh@471 -- # waitforlisten 1789096 00:22:31.186 19:53:12 -- common/autotest_common.sh@817 -- # '[' -z 1789096 ']' 00:22:31.186 19:53:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.186 19:53:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:31.186 19:53:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
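Everything above is nvmf_tcp_init splitting the physical E810 pair across a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as the target port (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so the NVMe/TCP traffic genuinely crosses the link, and the two pings verify reachability in both directions. The target is then started inside that namespace. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &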
00:22:31.186 19:53:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:31.186 19:53:12 -- common/autotest_common.sh@10 -- # set +x 00:22:31.186 [2024-04-24 19:53:12.646380] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:22:31.186 [2024-04-24 19:53:12.646467] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.186 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.445 [2024-04-24 19:53:12.711407] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.445 [2024-04-24 19:53:12.820660] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.445 [2024-04-24 19:53:12.820722] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.445 [2024-04-24 19:53:12.820750] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.445 [2024-04-24 19:53:12.820763] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.445 [2024-04-24 19:53:12.820773] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.445 [2024-04-24 19:53:12.820804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.445 19:53:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:31.445 19:53:12 -- common/autotest_common.sh@850 -- # return 0 00:22:31.445 19:53:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:31.445 19:53:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:31.445 19:53:12 -- common/autotest_common.sh@10 -- # set +x 00:22:31.704 19:53:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.704 19:53:12 -- target/dif.sh@139 -- # create_transport 00:22:31.704 19:53:12 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:31.704 19:53:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.704 19:53:12 -- common/autotest_common.sh@10 -- # set +x 00:22:31.704 [2024-04-24 19:53:12.965867] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.704 19:53:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.704 19:53:12 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:31.704 19:53:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:31.704 19:53:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:31.704 19:53:12 -- common/autotest_common.sh@10 -- # set +x 00:22:31.704 ************************************ 00:22:31.704 START TEST fio_dif_1_default 00:22:31.704 ************************************ 00:22:31.704 19:53:13 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:22:31.704 19:53:13 -- target/dif.sh@86 -- # create_subsystems 0 00:22:31.704 19:53:13 -- target/dif.sh@28 -- # local sub 00:22:31.704 19:53:13 -- target/dif.sh@30 -- # for sub in "$@" 00:22:31.704 19:53:13 -- target/dif.sh@31 -- # create_subsystem 0 00:22:31.704 19:53:13 -- target/dif.sh@18 -- # local sub_id=0 00:22:31.704 19:53:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:31.704 19:53:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.704 19:53:13 -- common/autotest_common.sh@10 -- # set +x 00:22:31.704 
bdev_null0 00:22:31.704 19:53:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.704 19:53:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:31.704 19:53:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.704 19:53:13 -- common/autotest_common.sh@10 -- # set +x 00:22:31.704 19:53:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.704 19:53:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:31.704 19:53:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.704 19:53:13 -- common/autotest_common.sh@10 -- # set +x 00:22:31.704 19:53:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.704 19:53:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:31.704 19:53:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.704 19:53:13 -- common/autotest_common.sh@10 -- # set +x 00:22:31.704 [2024-04-24 19:53:13.090335] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.704 19:53:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.704 19:53:13 -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:31.704 19:53:13 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:31.704 19:53:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:31.704 19:53:13 -- nvmf/common.sh@521 -- # config=() 00:22:31.704 19:53:13 -- nvmf/common.sh@521 -- # local subsystem config 00:22:31.704 19:53:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:31.704 19:53:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:31.704 19:53:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:31.704 { 00:22:31.704 "params": { 00:22:31.704 "name": "Nvme$subsystem", 00:22:31.704 "trtype": "$TEST_TRANSPORT", 00:22:31.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.704 "adrfam": "ipv4", 00:22:31.704 "trsvcid": "$NVMF_PORT", 00:22:31.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.704 "hdgst": ${hdgst:-false}, 00:22:31.704 "ddgst": ${ddgst:-false} 00:22:31.704 }, 00:22:31.704 "method": "bdev_nvme_attach_controller" 00:22:31.704 } 00:22:31.704 EOF 00:22:31.704 )") 00:22:31.704 19:53:13 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:31.704 19:53:13 -- target/dif.sh@82 -- # gen_fio_conf 00:22:31.704 19:53:13 -- target/dif.sh@54 -- # local file 00:22:31.704 19:53:13 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:31.704 19:53:13 -- target/dif.sh@56 -- # cat 00:22:31.704 19:53:13 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:31.704 19:53:13 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:31.704 19:53:13 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:31.704 19:53:13 -- common/autotest_common.sh@1327 -- # shift 00:22:31.704 19:53:13 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:31.704 19:53:13 -- nvmf/common.sh@543 -- # cat 00:22:31.704 19:53:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:31.704 19:53:13 -- common/autotest_common.sh@1331 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:31.704 19:53:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:31.704 19:53:13 -- target/dif.sh@72 -- # (( file <= files )) 00:22:31.704 19:53:13 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:31.704 19:53:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:31.704 19:53:13 -- nvmf/common.sh@545 -- # jq . 00:22:31.704 19:53:13 -- nvmf/common.sh@546 -- # IFS=, 00:22:31.704 19:53:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:31.704 "params": { 00:22:31.704 "name": "Nvme0", 00:22:31.704 "trtype": "tcp", 00:22:31.704 "traddr": "10.0.0.2", 00:22:31.704 "adrfam": "ipv4", 00:22:31.704 "trsvcid": "4420", 00:22:31.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:31.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:31.704 "hdgst": false, 00:22:31.704 "ddgst": false 00:22:31.704 }, 00:22:31.704 "method": "bdev_nvme_attach_controller" 00:22:31.704 }' 00:22:31.704 19:53:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:31.704 19:53:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:31.704 19:53:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:31.704 19:53:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:31.704 19:53:13 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:31.704 19:53:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:31.704 19:53:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:31.704 19:53:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:31.704 19:53:13 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:31.704 19:53:13 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:31.985 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:31.986 fio-3.35 00:22:31.986 Starting 1 thread 00:22:31.986 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.215 00:22:44.215 filename0: (groupid=0, jobs=1): err= 0: pid=1789333: Wed Apr 24 19:53:23 2024 00:22:44.215 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10020msec) 00:22:44.215 slat (nsec): min=4378, max=61129, avg=9130.51, stdev=4371.54 00:22:44.215 clat (usec): min=945, max=44150, avg=21564.92, stdev=20425.56 00:22:44.215 lat (usec): min=952, max=44174, avg=21574.05, stdev=20424.92 00:22:44.215 clat percentiles (usec): 00:22:44.215 | 1.00th=[ 963], 5.00th=[ 988], 10.00th=[ 1020], 20.00th=[ 1057], 00:22:44.215 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[41681], 60.00th=[41681], 00:22:44.215 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:22:44.215 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:22:44.215 | 99.99th=[44303] 00:22:44.215 bw ( KiB/s): min= 672, max= 768, per=99.88%, avg=740.80, stdev=34.86, samples=20 00:22:44.215 iops : min= 168, max= 192, avg=185.20, stdev= 8.72, samples=20 00:22:44.215 lat (usec) : 1000=7.49% 00:22:44.215 lat (msec) : 2=42.30%, 50=50.22% 00:22:44.215 cpu : usr=89.82%, sys=9.89%, ctx=16, majf=0, minf=255 00:22:44.215 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:44.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:22:44.215 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.215 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:44.215 00:22:44.215 Run status group 0 (all jobs): 00:22:44.216 READ: bw=741KiB/s (759kB/s), 741KiB/s-741KiB/s (759kB/s-759kB/s), io=7424KiB (7602kB), run=10020-10020msec 00:22:44.216 19:53:24 -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:44.216 19:53:24 -- target/dif.sh@43 -- # local sub 00:22:44.216 19:53:24 -- target/dif.sh@45 -- # for sub in "$@" 00:22:44.216 19:53:24 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:44.216 19:53:24 -- target/dif.sh@36 -- # local sub_id=0 00:22:44.216 19:53:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:44.216 19:53:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.216 19:53:24 -- common/autotest_common.sh@10 -- # set +x 00:22:44.216 19:53:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.216 19:53:24 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:44.216 19:53:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.216 19:53:24 -- common/autotest_common.sh@10 -- # set +x 00:22:44.216 19:53:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.216 00:22:44.216 real 0m11.183s 00:22:44.216 user 0m10.184s 00:22:44.216 sys 0m1.235s 00:22:44.216 19:53:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:44.216 19:53:24 -- common/autotest_common.sh@10 -- # set +x 00:22:44.216 ************************************ 00:22:44.216 END TEST fio_dif_1_default 00:22:44.216 ************************************ 00:22:44.216 19:53:24 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:44.216 19:53:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:44.216 19:53:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:44.216 19:53:24 -- common/autotest_common.sh@10 -- # set +x 00:22:44.216 ************************************ 00:22:44.216 START TEST fio_dif_1_multi_subsystems 00:22:44.216 ************************************ 00:22:44.216 19:53:24 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:22:44.216 19:53:24 -- target/dif.sh@92 -- # local files=1 00:22:44.216 19:53:24 -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:44.216 19:53:24 -- target/dif.sh@28 -- # local sub 00:22:44.216 19:53:24 -- target/dif.sh@30 -- # for sub in "$@" 00:22:44.216 19:53:24 -- target/dif.sh@31 -- # create_subsystem 0 00:22:44.216 19:53:24 -- target/dif.sh@18 -- # local sub_id=0 00:22:44.216 19:53:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:44.216 19:53:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.216 19:53:24 -- common/autotest_common.sh@10 -- # set +x 00:22:44.216 bdev_null0 00:22:44.216 19:53:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.216 19:53:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:44.216 19:53:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.216 19:53:24 -- common/autotest_common.sh@10 -- # set +x 00:22:44.216 19:53:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.216 19:53:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:44.216 19:53:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.216 19:53:24 -- common/autotest_common.sh@10 -- # set +x 
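Each sub-test provisions its backing devices the same way: a 64 MiB null bdev with 512-byte blocks and 16 bytes of per-block metadata at the requested DIF type, wrapped in its own subsystem and TCP listener (the transport itself was created once, up front, with --dif-insert-or-strip). The same sequence as standalone RPCs, assuming the target's default /var/tmp/spdk.sock socket:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420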
00:22:44.216 19:53:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.216 19:53:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:44.216 19:53:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.216 19:53:24 -- common/autotest_common.sh@10 -- # set +x 00:22:44.216 [2024-04-24 19:53:24.399735] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.216 19:53:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.216 19:53:24 -- target/dif.sh@30 -- # for sub in "$@" 00:22:44.216 19:53:24 -- target/dif.sh@31 -- # create_subsystem 1 00:22:44.216 19:53:24 -- target/dif.sh@18 -- # local sub_id=1 00:22:44.216 19:53:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:44.216 19:53:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.216 19:53:24 -- common/autotest_common.sh@10 -- # set +x 00:22:44.216 bdev_null1 00:22:44.216 19:53:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.216 19:53:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:44.216 19:53:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.216 19:53:24 -- common/autotest_common.sh@10 -- # set +x 00:22:44.216 19:53:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.216 19:53:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:44.216 19:53:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.216 19:53:24 -- common/autotest_common.sh@10 -- # set +x 00:22:44.216 19:53:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.216 19:53:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.216 19:53:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:44.216 19:53:24 -- common/autotest_common.sh@10 -- # set +x 00:22:44.216 19:53:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:44.216 19:53:24 -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:44.216 19:53:24 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:44.216 19:53:24 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:44.216 19:53:24 -- nvmf/common.sh@521 -- # config=() 00:22:44.216 19:53:24 -- nvmf/common.sh@521 -- # local subsystem config 00:22:44.216 19:53:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:44.216 19:53:24 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:44.216 19:53:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:44.216 { 00:22:44.216 "params": { 00:22:44.216 "name": "Nvme$subsystem", 00:22:44.216 "trtype": "$TEST_TRANSPORT", 00:22:44.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.216 "adrfam": "ipv4", 00:22:44.216 "trsvcid": "$NVMF_PORT", 00:22:44.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.216 "hdgst": ${hdgst:-false}, 00:22:44.216 "ddgst": ${ddgst:-false} 00:22:44.216 }, 00:22:44.216 "method": "bdev_nvme_attach_controller" 00:22:44.216 } 00:22:44.216 EOF 00:22:44.216 )") 00:22:44.216 19:53:24 -- target/dif.sh@82 -- # gen_fio_conf 00:22:44.216 19:53:24 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:22:44.216 19:53:24 -- target/dif.sh@54 -- # local file 00:22:44.216 19:53:24 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:44.216 19:53:24 -- target/dif.sh@56 -- # cat 00:22:44.216 19:53:24 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:44.216 19:53:24 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:44.216 19:53:24 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:44.216 19:53:24 -- common/autotest_common.sh@1327 -- # shift 00:22:44.216 19:53:24 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:44.216 19:53:24 -- nvmf/common.sh@543 -- # cat 00:22:44.216 19:53:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.216 19:53:24 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:44.216 19:53:24 -- target/dif.sh@72 -- # (( file <= files )) 00:22:44.216 19:53:24 -- target/dif.sh@73 -- # cat 00:22:44.216 19:53:24 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:44.216 19:53:24 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:44.216 19:53:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:44.216 19:53:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:44.216 19:53:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:44.216 { 00:22:44.216 "params": { 00:22:44.216 "name": "Nvme$subsystem", 00:22:44.216 "trtype": "$TEST_TRANSPORT", 00:22:44.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:44.216 "adrfam": "ipv4", 00:22:44.216 "trsvcid": "$NVMF_PORT", 00:22:44.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:44.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:44.216 "hdgst": ${hdgst:-false}, 00:22:44.216 "ddgst": ${ddgst:-false} 00:22:44.216 }, 00:22:44.216 "method": "bdev_nvme_attach_controller" 00:22:44.216 } 00:22:44.216 EOF 00:22:44.216 )") 00:22:44.216 19:53:24 -- nvmf/common.sh@543 -- # cat 00:22:44.216 19:53:24 -- target/dif.sh@72 -- # (( file++ )) 00:22:44.216 19:53:24 -- target/dif.sh@72 -- # (( file <= files )) 00:22:44.216 19:53:24 -- nvmf/common.sh@545 -- # jq . 
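The two /dev/fd arguments fio receives carry the JSON bdev configuration that gen_nvmf_target_json assembles (one bdev_nvme_attach_controller parameter block per subsystem, joined with IFS=, and checked with jq) and the generated job file. Stripped of the harness plumbing, the invocation is equivalent to something like the following sketch, using process substitution in place of the harness's file-descriptor juggling:

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf=<(gen_nvmf_target_json 0 1) \
    <(gen_fio_conf)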
00:22:44.216 19:53:24 -- nvmf/common.sh@546 -- # IFS=, 00:22:44.216 19:53:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:44.216 "params": { 00:22:44.216 "name": "Nvme0", 00:22:44.216 "trtype": "tcp", 00:22:44.216 "traddr": "10.0.0.2", 00:22:44.216 "adrfam": "ipv4", 00:22:44.216 "trsvcid": "4420", 00:22:44.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:44.216 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:44.216 "hdgst": false, 00:22:44.216 "ddgst": false 00:22:44.216 }, 00:22:44.216 "method": "bdev_nvme_attach_controller" 00:22:44.216 },{ 00:22:44.216 "params": { 00:22:44.216 "name": "Nvme1", 00:22:44.216 "trtype": "tcp", 00:22:44.216 "traddr": "10.0.0.2", 00:22:44.216 "adrfam": "ipv4", 00:22:44.216 "trsvcid": "4420", 00:22:44.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:44.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:44.216 "hdgst": false, 00:22:44.216 "ddgst": false 00:22:44.216 }, 00:22:44.216 "method": "bdev_nvme_attach_controller" 00:22:44.216 }' 00:22:44.216 19:53:24 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:44.216 19:53:24 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:44.216 19:53:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:44.216 19:53:24 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:44.216 19:53:24 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:44.216 19:53:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:44.216 19:53:24 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:44.216 19:53:24 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:44.216 19:53:24 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:44.216 19:53:24 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:44.216 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:44.216 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:44.216 fio-3.35 00:22:44.216 Starting 2 threads 00:22:44.216 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.185 00:22:54.185 filename0: (groupid=0, jobs=1): err= 0: pid=1790864: Wed Apr 24 19:53:35 2024 00:22:54.185 read: IOPS=185, BW=743KiB/s (760kB/s)(7440KiB/10020msec) 00:22:54.185 slat (nsec): min=6933, max=30898, avg=9866.12, stdev=4561.52 00:22:54.185 clat (usec): min=889, max=46386, avg=21516.89, stdev=20456.42 00:22:54.185 lat (usec): min=911, max=46406, avg=21526.76, stdev=20456.36 00:22:54.185 clat percentiles (usec): 00:22:54.185 | 1.00th=[ 955], 5.00th=[ 979], 10.00th=[ 996], 20.00th=[ 1012], 00:22:54.185 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[41157], 60.00th=[41681], 00:22:54.185 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:22:54.185 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:22:54.185 | 99.99th=[46400] 00:22:54.185 bw ( KiB/s): min= 704, max= 768, per=66.01%, avg=742.40, stdev=32.17, samples=20 00:22:54.185 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:22:54.185 lat (usec) : 1000=13.23% 00:22:54.185 lat (msec) : 2=36.67%, 50=50.11% 00:22:54.185 cpu : usr=93.98%, sys=5.71%, ctx=16, majf=0, minf=62 00:22:54.185 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:54.185 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.185 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.185 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.185 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:54.185 filename1: (groupid=0, jobs=1): err= 0: pid=1790865: Wed Apr 24 19:53:35 2024 00:22:54.185 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10021msec) 00:22:54.185 slat (nsec): min=4648, max=34821, avg=11147.15, stdev=6020.26 00:22:54.185 clat (usec): min=40928, max=47268, avg=41890.81, stdev=453.24 00:22:54.185 lat (usec): min=40936, max=47303, avg=41901.96, stdev=453.76 00:22:54.185 clat percentiles (usec): 00:22:54.185 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:22:54.185 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:22:54.185 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:22:54.185 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47449], 99.95th=[47449], 00:22:54.185 | 99.99th=[47449] 00:22:54.185 bw ( KiB/s): min= 352, max= 384, per=33.81%, avg=380.80, stdev= 9.85, samples=20 00:22:54.185 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:22:54.185 lat (msec) : 50=100.00% 00:22:54.186 cpu : usr=95.02%, sys=4.68%, ctx=13, majf=0, minf=182 00:22:54.186 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:54.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.186 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.186 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:54.186 00:22:54.186 Run status group 0 (all jobs): 00:22:54.186 READ: bw=1124KiB/s (1151kB/s), 382KiB/s-743KiB/s (391kB/s-760kB/s), io=11.0MiB (11.5MB), run=10020-10021msec 00:22:54.444 19:53:35 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:22:54.444 19:53:35 -- target/dif.sh@43 -- # local sub 00:22:54.444 19:53:35 -- target/dif.sh@45 -- # for sub in "$@" 00:22:54.444 19:53:35 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:54.444 19:53:35 -- target/dif.sh@36 -- # local sub_id=0 00:22:54.444 19:53:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:54.444 19:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:54.444 19:53:35 -- common/autotest_common.sh@10 -- # set +x 00:22:54.444 19:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:54.444 19:53:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:54.444 19:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:54.444 19:53:35 -- common/autotest_common.sh@10 -- # set +x 00:22:54.444 19:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:54.444 19:53:35 -- target/dif.sh@45 -- # for sub in "$@" 00:22:54.444 19:53:35 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:54.444 19:53:35 -- target/dif.sh@36 -- # local sub_id=1 00:22:54.444 19:53:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.444 19:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:54.444 19:53:35 -- common/autotest_common.sh@10 -- # set +x 00:22:54.444 19:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:54.444 19:53:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:54.444 19:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:54.444 19:53:35 -- 
common/autotest_common.sh@10 -- # set +x 00:22:54.445 19:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:54.445 00:22:54.445 real 0m11.434s 00:22:54.445 user 0m20.237s 00:22:54.445 sys 0m1.334s 00:22:54.445 19:53:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:54.445 19:53:35 -- common/autotest_common.sh@10 -- # set +x 00:22:54.445 ************************************ 00:22:54.445 END TEST fio_dif_1_multi_subsystems 00:22:54.445 ************************************ 00:22:54.445 19:53:35 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:22:54.445 19:53:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:54.445 19:53:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:54.445 19:53:35 -- common/autotest_common.sh@10 -- # set +x 00:22:54.445 ************************************ 00:22:54.445 START TEST fio_dif_rand_params 00:22:54.445 ************************************ 00:22:54.445 19:53:35 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:22:54.445 19:53:35 -- target/dif.sh@100 -- # local NULL_DIF 00:22:54.445 19:53:35 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:22:54.445 19:53:35 -- target/dif.sh@103 -- # NULL_DIF=3 00:22:54.445 19:53:35 -- target/dif.sh@103 -- # bs=128k 00:22:54.445 19:53:35 -- target/dif.sh@103 -- # numjobs=3 00:22:54.445 19:53:35 -- target/dif.sh@103 -- # iodepth=3 00:22:54.445 19:53:35 -- target/dif.sh@103 -- # runtime=5 00:22:54.445 19:53:35 -- target/dif.sh@105 -- # create_subsystems 0 00:22:54.445 19:53:35 -- target/dif.sh@28 -- # local sub 00:22:54.445 19:53:35 -- target/dif.sh@30 -- # for sub in "$@" 00:22:54.445 19:53:35 -- target/dif.sh@31 -- # create_subsystem 0 00:22:54.445 19:53:35 -- target/dif.sh@18 -- # local sub_id=0 00:22:54.445 19:53:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:54.445 19:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:54.445 19:53:35 -- common/autotest_common.sh@10 -- # set +x 00:22:54.445 bdev_null0 00:22:54.445 19:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:54.445 19:53:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:54.445 19:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:54.445 19:53:35 -- common/autotest_common.sh@10 -- # set +x 00:22:54.445 19:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:54.445 19:53:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:54.445 19:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:54.445 19:53:35 -- common/autotest_common.sh@10 -- # set +x 00:22:54.445 19:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:54.445 19:53:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:54.445 19:53:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:54.445 19:53:35 -- common/autotest_common.sh@10 -- # set +x 00:22:54.445 [2024-04-24 19:53:35.934606] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.445 19:53:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:54.445 19:53:35 -- target/dif.sh@106 -- # fio /dev/fd/62 00:22:54.445 19:53:35 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:22:54.445 19:53:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:54.445 19:53:35 
-- nvmf/common.sh@521 -- # config=() 00:22:54.445 19:53:35 -- nvmf/common.sh@521 -- # local subsystem config 00:22:54.445 19:53:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:54.445 19:53:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:54.445 { 00:22:54.445 "params": { 00:22:54.445 "name": "Nvme$subsystem", 00:22:54.445 "trtype": "$TEST_TRANSPORT", 00:22:54.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:54.445 "adrfam": "ipv4", 00:22:54.445 "trsvcid": "$NVMF_PORT", 00:22:54.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:54.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:54.445 "hdgst": ${hdgst:-false}, 00:22:54.445 "ddgst": ${ddgst:-false} 00:22:54.445 }, 00:22:54.445 "method": "bdev_nvme_attach_controller" 00:22:54.445 } 00:22:54.445 EOF 00:22:54.445 )") 00:22:54.445 19:53:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:54.445 19:53:35 -- target/dif.sh@82 -- # gen_fio_conf 00:22:54.445 19:53:35 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:54.445 19:53:35 -- target/dif.sh@54 -- # local file 00:22:54.445 19:53:35 -- target/dif.sh@56 -- # cat 00:22:54.445 19:53:35 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:54.445 19:53:35 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:54.445 19:53:35 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:54.445 19:53:35 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:54.445 19:53:35 -- nvmf/common.sh@543 -- # cat 00:22:54.445 19:53:35 -- common/autotest_common.sh@1327 -- # shift 00:22:54.445 19:53:35 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:54.445 19:53:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:54.445 19:53:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:54.445 19:53:35 -- target/dif.sh@72 -- # (( file <= files )) 00:22:54.445 19:53:35 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:54.445 19:53:35 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:54.445 19:53:35 -- nvmf/common.sh@545 -- # jq . 
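fio_dif_rand_params switches the null bdev to DIF type 3 and drives it with heavier parameters (128k blocks, 3 jobs, iodepth 3, 5-second runs); later passes in the same test rotate through other NULL_DIF values and job shapes, as with the DIF type 2, 4k, 8-job, depth-16 round below. Relative to the earlier sub-tests only the bdev creation changes; as a standalone RPC (socket path assumed):

# DIF type 3 backing device for the rand_params pass
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3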
00:22:54.445 19:53:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:54.445 19:53:35 -- nvmf/common.sh@546 -- # IFS=, 00:22:54.445 19:53:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:54.445 "params": { 00:22:54.445 "name": "Nvme0", 00:22:54.445 "trtype": "tcp", 00:22:54.445 "traddr": "10.0.0.2", 00:22:54.445 "adrfam": "ipv4", 00:22:54.445 "trsvcid": "4420", 00:22:54.445 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:54.445 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:54.445 "hdgst": false, 00:22:54.445 "ddgst": false 00:22:54.445 }, 00:22:54.445 "method": "bdev_nvme_attach_controller" 00:22:54.445 }' 00:22:54.703 19:53:35 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:54.703 19:53:35 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:54.703 19:53:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:54.703 19:53:35 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:54.703 19:53:35 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:54.703 19:53:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:54.703 19:53:35 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:54.703 19:53:35 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:54.703 19:53:35 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:54.703 19:53:35 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:54.704 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:54.704 ... 00:22:54.704 fio-3.35 00:22:54.704 Starting 3 threads 00:22:54.962 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.534 00:23:01.534 filename0: (groupid=0, jobs=1): err= 0: pid=1792272: Wed Apr 24 19:53:41 2024 00:23:01.534 read: IOPS=157, BW=19.7MiB/s (20.7MB/s)(99.0MiB/5026msec) 00:23:01.534 slat (nsec): min=6914, max=41093, avg=13909.52, stdev=4737.86 00:23:01.534 clat (usec): min=5788, max=92330, avg=19009.11, stdev=18377.99 00:23:01.534 lat (usec): min=5806, max=92348, avg=19023.02, stdev=18378.08 00:23:01.534 clat percentiles (usec): 00:23:01.534 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 7439], 00:23:01.534 | 30.00th=[ 8848], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[12387], 00:23:01.534 | 70.00th=[13435], 80.00th=[48497], 90.00th=[51643], 95.00th=[53216], 00:23:01.534 | 99.00th=[89654], 99.50th=[90702], 99.90th=[92799], 99.95th=[92799], 00:23:01.534 | 99.99th=[92799] 00:23:01.534 bw ( KiB/s): min=15360, max=27648, per=30.41%, avg=20198.40, stdev=4231.45, samples=10 00:23:01.534 iops : min= 120, max= 216, avg=157.80, stdev=33.06, samples=10 00:23:01.534 lat (msec) : 10=44.19%, 20=34.85%, 50=4.67%, 100=16.29% 00:23:01.534 cpu : usr=95.22%, sys=4.24%, ctx=12, majf=0, minf=107 00:23:01.534 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:01.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.534 issued rwts: total=792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.534 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:01.534 filename0: (groupid=0, jobs=1): err= 0: pid=1792273: Wed Apr 24 19:53:41 2024 00:23:01.534 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(139MiB/5005msec) 00:23:01.534 slat 
(nsec): min=4700, max=31495, avg=13486.06, stdev=3646.21 00:23:01.534 clat (usec): min=5627, max=91069, avg=13481.48, stdev=13413.41 00:23:01.534 lat (usec): min=5639, max=91082, avg=13494.96, stdev=13413.33 00:23:01.534 clat percentiles (usec): 00:23:01.534 | 1.00th=[ 5866], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 7111], 00:23:01.534 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9765], 00:23:01.534 | 70.00th=[10814], 80.00th=[11731], 90.00th=[47973], 95.00th=[50070], 00:23:01.534 | 99.00th=[52691], 99.50th=[59507], 99.90th=[90702], 99.95th=[90702], 00:23:01.534 | 99.99th=[90702] 00:23:01.534 bw ( KiB/s): min=17408, max=42240, per=42.75%, avg=28390.40, stdev=7450.39, samples=10 00:23:01.534 iops : min= 136, max= 330, avg=221.80, stdev=58.21, samples=10 00:23:01.534 lat (msec) : 10=62.23%, 20=27.34%, 50=5.22%, 100=5.22% 00:23:01.534 cpu : usr=94.24%, sys=5.08%, ctx=28, majf=0, minf=139 00:23:01.534 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:01.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.535 issued rwts: total=1112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.535 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:01.535 filename0: (groupid=0, jobs=1): err= 0: pid=1792274: Wed Apr 24 19:53:41 2024 00:23:01.535 read: IOPS=141, BW=17.7MiB/s (18.5MB/s)(89.1MiB/5044msec) 00:23:01.535 slat (nsec): min=4812, max=33118, avg=13828.32, stdev=3342.01 00:23:01.535 clat (usec): min=7366, max=97173, avg=21144.50, stdev=17744.29 00:23:01.535 lat (usec): min=7378, max=97188, avg=21158.33, stdev=17743.98 00:23:01.535 clat percentiles (usec): 00:23:01.535 | 1.00th=[ 8094], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10421], 00:23:01.535 | 30.00th=[11076], 40.00th=[11731], 50.00th=[13042], 60.00th=[14222], 00:23:01.535 | 70.00th=[15664], 80.00th=[49021], 90.00th=[52167], 95.00th=[54264], 00:23:01.535 | 99.00th=[65274], 99.50th=[92799], 99.90th=[96994], 99.95th=[96994], 00:23:01.535 | 99.99th=[96994] 00:23:01.535 bw ( KiB/s): min=14080, max=23296, per=27.37%, avg=18179.50, stdev=3344.09, samples=10 00:23:01.535 iops : min= 110, max= 182, avg=142.00, stdev=26.13, samples=10 00:23:01.535 lat (msec) : 10=15.85%, 20=62.83%, 50=3.23%, 100=18.09% 00:23:01.535 cpu : usr=96.05%, sys=3.57%, ctx=7, majf=0, minf=54 00:23:01.535 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:01.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.535 issued rwts: total=713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.535 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:01.535 00:23:01.535 Run status group 0 (all jobs): 00:23:01.535 READ: bw=64.9MiB/s (68.0MB/s), 17.7MiB/s-27.8MiB/s (18.5MB/s-29.1MB/s), io=327MiB (343MB), run=5005-5044msec 00:23:01.535 19:53:41 -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:01.535 19:53:41 -- target/dif.sh@43 -- # local sub 00:23:01.535 19:53:41 -- target/dif.sh@45 -- # for sub in "$@" 00:23:01.535 19:53:41 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:01.535 19:53:41 -- target/dif.sh@36 -- # local sub_id=0 00:23:01.535 19:53:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:01.535 19:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.535 19:53:41 -- common/autotest_common.sh@10 -- # 
set +x 00:23:01.535 19:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.535 19:53:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:01.535 19:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.535 19:53:41 -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 19:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.535 19:53:41 -- target/dif.sh@109 -- # NULL_DIF=2 00:23:01.535 19:53:41 -- target/dif.sh@109 -- # bs=4k 00:23:01.535 19:53:41 -- target/dif.sh@109 -- # numjobs=8 00:23:01.535 19:53:41 -- target/dif.sh@109 -- # iodepth=16 00:23:01.535 19:53:41 -- target/dif.sh@109 -- # runtime= 00:23:01.535 19:53:41 -- target/dif.sh@109 -- # files=2 00:23:01.535 19:53:41 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:01.535 19:53:41 -- target/dif.sh@28 -- # local sub 00:23:01.535 19:53:41 -- target/dif.sh@30 -- # for sub in "$@" 00:23:01.535 19:53:41 -- target/dif.sh@31 -- # create_subsystem 0 00:23:01.535 19:53:41 -- target/dif.sh@18 -- # local sub_id=0 00:23:01.535 19:53:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:01.535 19:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.535 19:53:41 -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 bdev_null0 00:23:01.535 19:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.535 19:53:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:01.535 19:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.535 19:53:41 -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 19:53:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.535 19:53:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:01.535 19:53:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.535 19:53:42 -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 19:53:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.535 19:53:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:01.535 19:53:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.535 19:53:42 -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 [2024-04-24 19:53:42.019089] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.535 19:53:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.535 19:53:42 -- target/dif.sh@30 -- # for sub in "$@" 00:23:01.535 19:53:42 -- target/dif.sh@31 -- # create_subsystem 1 00:23:01.535 19:53:42 -- target/dif.sh@18 -- # local sub_id=1 00:23:01.535 19:53:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:01.535 19:53:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.535 19:53:42 -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 bdev_null1 00:23:01.535 19:53:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.535 19:53:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:01.535 19:53:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.535 19:53:42 -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 19:53:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.535 19:53:42 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:01.535 19:53:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.535 19:53:42 -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 19:53:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.535 19:53:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.535 19:53:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.535 19:53:42 -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 19:53:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.535 19:53:42 -- target/dif.sh@30 -- # for sub in "$@" 00:23:01.535 19:53:42 -- target/dif.sh@31 -- # create_subsystem 2 00:23:01.535 19:53:42 -- target/dif.sh@18 -- # local sub_id=2 00:23:01.535 19:53:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:01.535 19:53:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.535 19:53:42 -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 bdev_null2 00:23:01.535 19:53:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.535 19:53:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:01.535 19:53:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.535 19:53:42 -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 19:53:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.535 19:53:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:01.535 19:53:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.535 19:53:42 -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 19:53:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.535 19:53:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:01.535 19:53:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.535 19:53:42 -- common/autotest_common.sh@10 -- # set +x 00:23:01.535 19:53:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.535 19:53:42 -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:01.535 19:53:42 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:01.535 19:53:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:01.535 19:53:42 -- nvmf/common.sh@521 -- # config=() 00:23:01.535 19:53:42 -- nvmf/common.sh@521 -- # local subsystem config 00:23:01.535 19:53:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:01.535 19:53:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:01.535 19:53:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:01.535 { 00:23:01.535 "params": { 00:23:01.535 "name": "Nvme$subsystem", 00:23:01.535 "trtype": "$TEST_TRANSPORT", 00:23:01.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.535 "adrfam": "ipv4", 00:23:01.535 "trsvcid": "$NVMF_PORT", 00:23:01.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.535 "hdgst": ${hdgst:-false}, 00:23:01.535 "ddgst": ${ddgst:-false} 00:23:01.535 }, 00:23:01.535 "method": "bdev_nvme_attach_controller" 00:23:01.535 } 00:23:01.535 EOF 00:23:01.535 )") 00:23:01.535 19:53:42 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:01.535 19:53:42 -- target/dif.sh@82 -- # gen_fio_conf 00:23:01.535 19:53:42 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:01.535 19:53:42 -- target/dif.sh@54 -- # local file 00:23:01.535 19:53:42 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:01.535 19:53:42 -- target/dif.sh@56 -- # cat 00:23:01.535 19:53:42 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:01.535 19:53:42 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:01.535 19:53:42 -- common/autotest_common.sh@1327 -- # shift 00:23:01.535 19:53:42 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:01.535 19:53:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:01.535 19:53:42 -- nvmf/common.sh@543 -- # cat 00:23:01.535 19:53:42 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:01.535 19:53:42 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:01.535 19:53:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:01.535 19:53:42 -- target/dif.sh@72 -- # (( file <= files )) 00:23:01.535 19:53:42 -- target/dif.sh@73 -- # cat 00:23:01.535 19:53:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:01.535 19:53:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:01.535 19:53:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:01.535 { 00:23:01.535 "params": { 00:23:01.535 "name": "Nvme$subsystem", 00:23:01.535 "trtype": "$TEST_TRANSPORT", 00:23:01.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.535 "adrfam": "ipv4", 00:23:01.535 "trsvcid": "$NVMF_PORT", 00:23:01.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.536 "hdgst": ${hdgst:-false}, 00:23:01.536 "ddgst": ${ddgst:-false} 00:23:01.536 }, 00:23:01.536 "method": "bdev_nvme_attach_controller" 00:23:01.536 } 00:23:01.536 EOF 00:23:01.536 )") 00:23:01.536 19:53:42 -- nvmf/common.sh@543 -- # cat 00:23:01.536 19:53:42 -- target/dif.sh@72 -- # (( file++ )) 00:23:01.536 19:53:42 -- target/dif.sh@72 -- # (( file <= files )) 00:23:01.536 19:53:42 -- target/dif.sh@73 -- # cat 00:23:01.536 19:53:42 -- target/dif.sh@72 -- # (( file++ )) 00:23:01.536 19:53:42 -- target/dif.sh@72 -- # (( file <= files )) 00:23:01.536 19:53:42 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:01.536 19:53:42 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:01.536 { 00:23:01.536 "params": { 00:23:01.536 "name": "Nvme$subsystem", 00:23:01.536 "trtype": "$TEST_TRANSPORT", 00:23:01.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.536 "adrfam": "ipv4", 00:23:01.536 "trsvcid": "$NVMF_PORT", 00:23:01.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.536 "hdgst": ${hdgst:-false}, 00:23:01.536 "ddgst": ${ddgst:-false} 00:23:01.536 }, 00:23:01.536 "method": "bdev_nvme_attach_controller" 00:23:01.536 } 00:23:01.536 EOF 00:23:01.536 )") 00:23:01.536 19:53:42 -- nvmf/common.sh@543 -- # cat 00:23:01.536 19:53:42 -- nvmf/common.sh@545 -- # jq . 
00:23:01.536 19:53:42 -- nvmf/common.sh@546 -- # IFS=, 00:23:01.536 19:53:42 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:01.536 "params": { 00:23:01.536 "name": "Nvme0", 00:23:01.536 "trtype": "tcp", 00:23:01.536 "traddr": "10.0.0.2", 00:23:01.536 "adrfam": "ipv4", 00:23:01.536 "trsvcid": "4420", 00:23:01.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.536 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:01.536 "hdgst": false, 00:23:01.536 "ddgst": false 00:23:01.536 }, 00:23:01.536 "method": "bdev_nvme_attach_controller" 00:23:01.536 },{ 00:23:01.536 "params": { 00:23:01.536 "name": "Nvme1", 00:23:01.536 "trtype": "tcp", 00:23:01.536 "traddr": "10.0.0.2", 00:23:01.536 "adrfam": "ipv4", 00:23:01.536 "trsvcid": "4420", 00:23:01.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.536 "hdgst": false, 00:23:01.536 "ddgst": false 00:23:01.536 }, 00:23:01.536 "method": "bdev_nvme_attach_controller" 00:23:01.536 },{ 00:23:01.536 "params": { 00:23:01.536 "name": "Nvme2", 00:23:01.536 "trtype": "tcp", 00:23:01.536 "traddr": "10.0.0.2", 00:23:01.536 "adrfam": "ipv4", 00:23:01.536 "trsvcid": "4420", 00:23:01.536 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.536 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.536 "hdgst": false, 00:23:01.536 "ddgst": false 00:23:01.536 }, 00:23:01.536 "method": "bdev_nvme_attach_controller" 00:23:01.536 }' 00:23:01.536 19:53:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:01.536 19:53:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:01.536 19:53:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:01.536 19:53:42 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:01.536 19:53:42 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:01.536 19:53:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:01.536 19:53:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:01.536 19:53:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:01.536 19:53:42 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:01.536 19:53:42 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:01.536 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:01.536 ... 00:23:01.536 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:01.536 ... 00:23:01.536 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:01.536 ... 
00:23:01.536 fio-3.35 00:23:01.536 Starting 24 threads 00:23:01.536 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.746 00:23:13.746 filename0: (groupid=0, jobs=1): err= 0: pid=1793136: Wed Apr 24 19:53:53 2024 00:23:13.746 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10011msec) 00:23:13.746 slat (usec): min=5, max=1109, avg=29.67, stdev=30.33 00:23:13.746 clat (usec): min=10409, max=54769, avg=33443.65, stdev=3410.97 00:23:13.746 lat (usec): min=10427, max=54809, avg=33473.32, stdev=3410.89 00:23:13.746 clat percentiles (usec): 00:23:13.746 | 1.00th=[17433], 5.00th=[30016], 10.00th=[32375], 20.00th=[32900], 00:23:13.746 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.746 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[38011], 00:23:13.746 | 99.00th=[43779], 99.50th=[49546], 99.90th=[54789], 99.95th=[54789], 00:23:13.746 | 99.99th=[54789] 00:23:13.746 bw ( KiB/s): min= 1792, max= 1952, per=4.17%, avg=1895.55, stdev=51.59, samples=20 00:23:13.746 iops : min= 448, max= 488, avg=473.85, stdev=12.88, samples=20 00:23:13.746 lat (msec) : 20=1.09%, 50=98.47%, 100=0.44% 00:23:13.746 cpu : usr=95.10%, sys=2.68%, ctx=140, majf=0, minf=66 00:23:13.746 IO depths : 1=2.5%, 2=6.6%, 4=17.1%, 8=63.7%, 16=10.1%, 32=0.0%, >=64=0.0% 00:23:13.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 complete : 0=0.0%, 4=92.5%, 8=1.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 issued rwts: total=4756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.746 filename0: (groupid=0, jobs=1): err= 0: pid=1793137: Wed Apr 24 19:53:53 2024 00:23:13.746 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10007msec) 00:23:13.746 slat (usec): min=8, max=155, avg=42.37, stdev=19.34 00:23:13.746 clat (usec): min=21638, max=59373, avg=33577.51, stdev=1980.93 00:23:13.746 lat (usec): min=21647, max=59383, avg=33619.88, stdev=1979.74 00:23:13.746 clat percentiles (usec): 00:23:13.746 | 1.00th=[29754], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:23:13.746 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.746 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:23:13.746 | 99.00th=[41681], 99.50th=[43779], 99.90th=[52167], 99.95th=[56886], 00:23:13.746 | 99.99th=[59507] 00:23:13.746 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1886.11, stdev=56.04, samples=19 00:23:13.746 iops : min= 448, max= 480, avg=471.53, stdev=14.01, samples=19 00:23:13.746 lat (msec) : 50=99.62%, 100=0.38% 00:23:13.746 cpu : usr=94.81%, sys=2.91%, ctx=43, majf=0, minf=44 00:23:13.746 IO depths : 1=4.4%, 2=10.2%, 4=23.8%, 8=53.5%, 16=8.1%, 32=0.0%, >=64=0.0% 00:23:13.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.746 filename0: (groupid=0, jobs=1): err= 0: pid=1793138: Wed Apr 24 19:53:53 2024 00:23:13.746 read: IOPS=470, BW=1882KiB/s (1927kB/s)(18.4MiB/10008msec) 00:23:13.746 slat (usec): min=8, max=139, avg=46.13, stdev=22.38 00:23:13.746 clat (usec): min=7747, max=89351, avg=33657.79, stdev=4838.46 00:23:13.746 lat (usec): min=7756, max=89397, avg=33703.92, stdev=4837.41 00:23:13.746 clat percentiles (usec): 00:23:13.746 | 1.00th=[20841], 5.00th=[31327], 
10.00th=[32637], 20.00th=[32900], 00:23:13.746 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:23:13.746 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[38011], 00:23:13.746 | 99.00th=[51119], 99.50th=[55837], 99.90th=[89654], 99.95th=[89654], 00:23:13.746 | 99.99th=[89654] 00:23:13.746 bw ( KiB/s): min= 1539, max= 1984, per=4.12%, avg=1873.84, stdev=101.05, samples=19 00:23:13.746 iops : min= 384, max= 496, avg=468.42, stdev=25.40, samples=19 00:23:13.746 lat (msec) : 10=0.15%, 20=0.72%, 50=97.94%, 100=1.19% 00:23:13.746 cpu : usr=98.18%, sys=1.34%, ctx=72, majf=0, minf=48 00:23:13.746 IO depths : 1=1.4%, 2=6.5%, 4=21.1%, 8=59.2%, 16=11.7%, 32=0.0%, >=64=0.0% 00:23:13.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 complete : 0=0.0%, 4=93.5%, 8=1.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 issued rwts: total=4708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.746 filename0: (groupid=0, jobs=1): err= 0: pid=1793139: Wed Apr 24 19:53:53 2024 00:23:13.746 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10008msec) 00:23:13.746 slat (usec): min=7, max=242, avg=35.51, stdev=19.72 00:23:13.746 clat (usec): min=13442, max=47032, avg=33411.46, stdev=2122.51 00:23:13.746 lat (usec): min=13452, max=47047, avg=33446.97, stdev=2123.17 00:23:13.746 clat percentiles (usec): 00:23:13.746 | 1.00th=[26870], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:23:13.746 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.746 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:23:13.746 | 99.00th=[40109], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:23:13.746 | 99.99th=[46924] 00:23:13.746 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1898.95, stdev=46.03, samples=19 00:23:13.746 iops : min= 448, max= 480, avg=474.74, stdev=11.51, samples=19 00:23:13.746 lat (msec) : 20=0.67%, 50=99.33% 00:23:13.746 cpu : usr=98.20%, sys=1.39%, ctx=24, majf=0, minf=36 00:23:13.746 IO depths : 1=4.3%, 2=10.4%, 4=24.5%, 8=52.6%, 16=8.3%, 32=0.0%, >=64=0.0% 00:23:13.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 issued rwts: total=4751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.746 filename0: (groupid=0, jobs=1): err= 0: pid=1793140: Wed Apr 24 19:53:53 2024 00:23:13.746 read: IOPS=470, BW=1880KiB/s (1925kB/s)(18.4MiB/10017msec) 00:23:13.746 slat (usec): min=8, max=134, avg=35.09, stdev=22.62 00:23:13.746 clat (usec): min=8105, max=66022, avg=33765.64, stdev=3266.54 00:23:13.746 lat (usec): min=8180, max=66037, avg=33800.74, stdev=3265.97 00:23:13.746 clat percentiles (usec): 00:23:13.746 | 1.00th=[27132], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:23:13.746 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.746 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36963], 00:23:13.746 | 99.00th=[47973], 99.50th=[57410], 99.90th=[61604], 99.95th=[65799], 00:23:13.746 | 99.99th=[65799] 00:23:13.746 bw ( KiB/s): min= 1760, max= 2048, per=4.12%, avg=1874.32, stdev=79.21, samples=19 00:23:13.746 iops : min= 440, max= 512, avg=468.58, stdev=19.80, samples=19 00:23:13.746 lat (msec) : 10=0.13%, 20=0.34%, 50=98.66%, 100=0.87% 00:23:13.746 cpu : usr=98.09%, sys=1.49%, ctx=22, 
majf=0, minf=66 00:23:13.746 IO depths : 1=3.5%, 2=8.9%, 4=22.5%, 8=55.7%, 16=9.4%, 32=0.0%, >=64=0.0% 00:23:13.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 complete : 0=0.0%, 4=93.7%, 8=1.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 issued rwts: total=4708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.746 filename0: (groupid=0, jobs=1): err= 0: pid=1793141: Wed Apr 24 19:53:53 2024 00:23:13.746 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10009msec) 00:23:13.746 slat (usec): min=10, max=137, avg=43.89, stdev=14.14 00:23:13.746 clat (usec): min=10768, max=61548, avg=33410.48, stdev=2359.72 00:23:13.746 lat (usec): min=10782, max=61586, avg=33454.37, stdev=2360.05 00:23:13.746 clat percentiles (usec): 00:23:13.746 | 1.00th=[31327], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:23:13.746 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:23:13.746 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:23:13.746 | 99.00th=[39584], 99.50th=[43254], 99.90th=[61604], 99.95th=[61604], 00:23:13.746 | 99.99th=[61604] 00:23:13.746 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1886.32, stdev=71.93, samples=19 00:23:13.746 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:23:13.746 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:23:13.746 cpu : usr=96.29%, sys=2.09%, ctx=109, majf=0, minf=41 00:23:13.746 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:23:13.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.746 filename0: (groupid=0, jobs=1): err= 0: pid=1793142: Wed Apr 24 19:53:53 2024 00:23:13.746 read: IOPS=474, BW=1900KiB/s (1945kB/s)(18.6MiB/10018msec) 00:23:13.746 slat (usec): min=8, max=458, avg=46.04, stdev=18.38 00:23:13.746 clat (usec): min=19508, max=53589, avg=33270.70, stdev=1996.78 00:23:13.746 lat (usec): min=19522, max=53610, avg=33316.73, stdev=1996.45 00:23:13.746 clat percentiles (usec): 00:23:13.746 | 1.00th=[22938], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:23:13.746 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:23:13.746 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:23:13.746 | 99.00th=[40109], 99.50th=[43254], 99.90th=[45876], 99.95th=[52167], 00:23:13.746 | 99.99th=[53740] 00:23:13.746 bw ( KiB/s): min= 1788, max= 1968, per=4.17%, avg=1895.37, stdev=56.45, samples=19 00:23:13.746 iops : min= 447, max= 492, avg=473.84, stdev=14.11, samples=19 00:23:13.746 lat (msec) : 20=0.04%, 50=99.87%, 100=0.08% 00:23:13.746 cpu : usr=86.51%, sys=5.63%, ctx=210, majf=0, minf=43 00:23:13.746 IO depths : 1=5.7%, 2=11.8%, 4=24.6%, 8=51.1%, 16=6.8%, 32=0.0%, >=64=0.0% 00:23:13.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 issued rwts: total=4758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.746 filename0: (groupid=0, jobs=1): err= 0: pid=1793143: Wed Apr 24 19:53:53 2024 00:23:13.746 read: IOPS=471, BW=1885KiB/s (1931kB/s)(18.4MiB/10010msec) 00:23:13.746 
slat (usec): min=7, max=127, avg=42.22, stdev=19.40 00:23:13.746 clat (usec): min=9319, max=63337, avg=33648.87, stdev=4369.45 00:23:13.746 lat (usec): min=9373, max=63380, avg=33691.09, stdev=4369.80 00:23:13.746 clat percentiles (usec): 00:23:13.746 | 1.00th=[18482], 5.00th=[31589], 10.00th=[32637], 20.00th=[32900], 00:23:13.746 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.746 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34866], 95.00th=[38536], 00:23:13.746 | 99.00th=[56886], 99.50th=[58983], 99.90th=[63177], 99.95th=[63177], 00:23:13.746 | 99.99th=[63177] 00:23:13.746 bw ( KiB/s): min= 1592, max= 1936, per=4.13%, avg=1878.89, stdev=85.34, samples=19 00:23:13.746 iops : min= 398, max= 484, avg=469.68, stdev=21.38, samples=19 00:23:13.746 lat (msec) : 10=0.19%, 20=0.93%, 50=97.16%, 100=1.72% 00:23:13.746 cpu : usr=97.89%, sys=1.59%, ctx=37, majf=0, minf=58 00:23:13.746 IO depths : 1=3.0%, 2=7.4%, 4=18.3%, 8=60.6%, 16=10.7%, 32=0.0%, >=64=0.0% 00:23:13.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 complete : 0=0.0%, 4=92.8%, 8=2.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 issued rwts: total=4718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.746 filename1: (groupid=0, jobs=1): err= 0: pid=1793144: Wed Apr 24 19:53:53 2024 00:23:13.746 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10009msec) 00:23:13.746 slat (usec): min=8, max=149, avg=41.59, stdev=14.93 00:23:13.746 clat (usec): min=11854, max=62982, avg=33434.53, stdev=2449.28 00:23:13.746 lat (usec): min=11868, max=63023, avg=33476.13, stdev=2449.46 00:23:13.746 clat percentiles (usec): 00:23:13.746 | 1.00th=[31065], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:23:13.746 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:23:13.746 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:23:13.746 | 99.00th=[40109], 99.50th=[43779], 99.90th=[62653], 99.95th=[62653], 00:23:13.746 | 99.99th=[63177] 00:23:13.746 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1886.63, stdev=71.20, samples=19 00:23:13.746 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:23:13.746 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:23:13.746 cpu : usr=98.40%, sys=1.21%, ctx=13, majf=0, minf=38 00:23:13.746 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:23:13.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.746 filename1: (groupid=0, jobs=1): err= 0: pid=1793145: Wed Apr 24 19:53:53 2024 00:23:13.746 read: IOPS=475, BW=1904KiB/s (1950kB/s)(18.6MiB/10011msec) 00:23:13.746 slat (usec): min=8, max=162, avg=25.36, stdev=18.67 00:23:13.746 clat (usec): min=10166, max=46857, avg=33409.94, stdev=2488.51 00:23:13.746 lat (usec): min=10184, max=46918, avg=33435.30, stdev=2489.08 00:23:13.746 clat percentiles (usec): 00:23:13.746 | 1.00th=[19530], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:23:13.746 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:23:13.746 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:23:13.746 | 99.00th=[40633], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:23:13.746 | 
99.99th=[46924] 00:23:13.746 bw ( KiB/s): min= 1788, max= 1923, per=4.18%, avg=1899.15, stdev=47.08, samples=20 00:23:13.746 iops : min= 447, max= 480, avg=474.75, stdev=11.75, samples=20 00:23:13.746 lat (msec) : 20=1.22%, 50=98.78% 00:23:13.746 cpu : usr=97.28%, sys=1.99%, ctx=86, majf=0, minf=57 00:23:13.746 IO depths : 1=3.1%, 2=9.0%, 4=23.9%, 8=54.6%, 16=9.4%, 32=0.0%, >=64=0.0% 00:23:13.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 issued rwts: total=4765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.746 filename1: (groupid=0, jobs=1): err= 0: pid=1793146: Wed Apr 24 19:53:53 2024 00:23:13.746 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10015msec) 00:23:13.746 slat (nsec): min=7962, max=99151, avg=33163.53, stdev=14166.04 00:23:13.746 clat (usec): min=22945, max=47358, avg=33559.51, stdev=1920.83 00:23:13.746 lat (usec): min=22980, max=47391, avg=33592.67, stdev=1918.54 00:23:13.746 clat percentiles (usec): 00:23:13.746 | 1.00th=[26870], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:23:13.746 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.746 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:23:13.746 | 99.00th=[41681], 99.50th=[43254], 99.90th=[45876], 99.95th=[46924], 00:23:13.746 | 99.99th=[47449] 00:23:13.746 bw ( KiB/s): min= 1788, max= 1920, per=4.15%, avg=1886.11, stdev=58.28, samples=19 00:23:13.746 iops : min= 447, max= 480, avg=471.53, stdev=14.57, samples=19 00:23:13.746 lat (msec) : 50=100.00% 00:23:13.746 cpu : usr=96.79%, sys=1.79%, ctx=37, majf=0, minf=54 00:23:13.746 IO depths : 1=5.3%, 2=11.2%, 4=24.0%, 8=52.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:23:13.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.746 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.746 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.746 filename1: (groupid=0, jobs=1): err= 0: pid=1793147: Wed Apr 24 19:53:53 2024 00:23:13.746 read: IOPS=471, BW=1886KiB/s (1932kB/s)(18.4MiB/10009msec) 00:23:13.746 slat (usec): min=5, max=137, avg=40.70, stdev=20.91 00:23:13.746 clat (usec): min=10012, max=73222, avg=33600.89, stdev=3116.44 00:23:13.746 lat (usec): min=10046, max=73238, avg=33641.59, stdev=3114.16 00:23:13.746 clat percentiles (usec): 00:23:13.747 | 1.00th=[29492], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:23:13.747 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.747 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:23:13.747 | 99.00th=[43254], 99.50th=[52691], 99.90th=[72877], 99.95th=[72877], 00:23:13.747 | 99.99th=[72877] 00:23:13.747 bw ( KiB/s): min= 1664, max= 1920, per=4.13%, avg=1879.58, stdev=71.83, samples=19 00:23:13.747 iops : min= 416, max= 480, avg=469.89, stdev=17.96, samples=19 00:23:13.747 lat (msec) : 20=0.32%, 50=99.15%, 100=0.53% 00:23:13.747 cpu : usr=97.31%, sys=1.84%, ctx=40, majf=0, minf=47 00:23:13.747 IO depths : 1=2.9%, 2=9.0%, 4=24.5%, 8=54.0%, 16=9.6%, 32=0.0%, >=64=0.0% 00:23:13.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 issued rwts: total=4720,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:23:13.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.747 filename1: (groupid=0, jobs=1): err= 0: pid=1793148: Wed Apr 24 19:53:53 2024 00:23:13.747 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10011msec) 00:23:13.747 slat (usec): min=7, max=280, avg=27.67, stdev=21.12 00:23:13.747 clat (usec): min=5463, max=58948, avg=32450.52, stdev=4622.75 00:23:13.747 lat (usec): min=5476, max=58958, avg=32478.18, stdev=4626.86 00:23:13.747 clat percentiles (usec): 00:23:13.747 | 1.00th=[10028], 5.00th=[20317], 10.00th=[31327], 20.00th=[32900], 00:23:13.747 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.747 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:23:13.747 | 99.00th=[39584], 99.50th=[43254], 99.90th=[58459], 99.95th=[58983], 00:23:13.747 | 99.99th=[58983] 00:23:13.747 bw ( KiB/s): min= 1792, max= 2096, per=4.29%, avg=1953.00, stdev=88.33, samples=20 00:23:13.747 iops : min= 448, max= 524, avg=488.25, stdev=22.08, samples=20 00:23:13.747 lat (msec) : 10=0.98%, 20=3.49%, 50=95.24%, 100=0.29% 00:23:13.747 cpu : usr=98.15%, sys=1.40%, ctx=24, majf=0, minf=47 00:23:13.747 IO depths : 1=4.8%, 2=9.9%, 4=21.3%, 8=56.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:23:13.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 issued rwts: total=4900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.747 filename1: (groupid=0, jobs=1): err= 0: pid=1793149: Wed Apr 24 19:53:53 2024 00:23:13.747 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10007msec) 00:23:13.747 slat (usec): min=9, max=117, avg=41.23, stdev=12.50 00:23:13.747 clat (usec): min=12630, max=60533, avg=33429.23, stdev=2285.60 00:23:13.747 lat (usec): min=12644, max=60572, avg=33470.47, stdev=2286.14 00:23:13.747 clat percentiles (usec): 00:23:13.747 | 1.00th=[31327], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:23:13.747 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:23:13.747 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:23:13.747 | 99.00th=[39584], 99.50th=[43779], 99.90th=[60556], 99.95th=[60556], 00:23:13.747 | 99.99th=[60556] 00:23:13.747 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1886.47, stdev=71.42, samples=19 00:23:13.747 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:23:13.747 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:23:13.747 cpu : usr=98.30%, sys=1.30%, ctx=14, majf=0, minf=43 00:23:13.747 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:23:13.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.747 filename1: (groupid=0, jobs=1): err= 0: pid=1793150: Wed Apr 24 19:53:53 2024 00:23:13.747 read: IOPS=472, BW=1890KiB/s (1936kB/s)(18.5MiB/10017msec) 00:23:13.747 slat (usec): min=8, max=1213, avg=34.52, stdev=25.09 00:23:13.747 clat (usec): min=19336, max=65240, avg=33587.97, stdev=2089.53 00:23:13.747 lat (usec): min=19398, max=65261, avg=33622.49, stdev=2088.73 00:23:13.747 clat percentiles (usec): 00:23:13.747 | 1.00th=[27132], 5.00th=[32113], 10.00th=[32637], 
20.00th=[32900], 00:23:13.747 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.747 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:23:13.747 | 99.00th=[42730], 99.50th=[44827], 99.90th=[50070], 99.95th=[50070], 00:23:13.747 | 99.99th=[65274] 00:23:13.747 bw ( KiB/s): min= 1788, max= 1920, per=4.15%, avg=1885.26, stdev=57.87, samples=19 00:23:13.747 iops : min= 447, max= 480, avg=471.32, stdev=14.47, samples=19 00:23:13.747 lat (msec) : 20=0.13%, 50=99.75%, 100=0.13% 00:23:13.747 cpu : usr=97.23%, sys=1.91%, ctx=143, majf=0, minf=53 00:23:13.747 IO depths : 1=2.8%, 2=8.0%, 4=21.9%, 8=57.6%, 16=9.7%, 32=0.0%, >=64=0.0% 00:23:13.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 issued rwts: total=4734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.747 filename1: (groupid=0, jobs=1): err= 0: pid=1793151: Wed Apr 24 19:53:53 2024 00:23:13.747 read: IOPS=472, BW=1890KiB/s (1936kB/s)(18.5MiB/10008msec) 00:23:13.747 slat (usec): min=8, max=171, avg=39.02, stdev=18.25 00:23:13.747 clat (usec): min=9183, max=72450, avg=33542.72, stdev=3316.46 00:23:13.747 lat (usec): min=9206, max=72478, avg=33581.74, stdev=3316.48 00:23:13.747 clat percentiles (usec): 00:23:13.747 | 1.00th=[23200], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:23:13.747 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.747 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:23:13.747 | 99.00th=[43779], 99.50th=[50070], 99.90th=[72877], 99.95th=[72877], 00:23:13.747 | 99.99th=[72877] 00:23:13.747 bw ( KiB/s): min= 1651, max= 1968, per=4.14%, avg=1883.95, stdev=73.53, samples=19 00:23:13.747 iops : min= 412, max= 492, avg=470.95, stdev=18.52, samples=19 00:23:13.747 lat (msec) : 10=0.08%, 20=0.51%, 50=98.82%, 100=0.59% 00:23:13.747 cpu : usr=88.88%, sys=4.84%, ctx=290, majf=0, minf=44 00:23:13.747 IO depths : 1=2.7%, 2=8.2%, 4=22.8%, 8=56.2%, 16=10.1%, 32=0.0%, >=64=0.0% 00:23:13.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 complete : 0=0.0%, 4=93.8%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 issued rwts: total=4730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.747 filename2: (groupid=0, jobs=1): err= 0: pid=1793152: Wed Apr 24 19:53:53 2024 00:23:13.747 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10017msec) 00:23:13.747 slat (usec): min=8, max=642, avg=41.27, stdev=25.40 00:23:13.747 clat (usec): min=15977, max=50659, avg=33486.24, stdev=1782.74 00:23:13.747 lat (usec): min=15989, max=50684, avg=33527.52, stdev=1780.86 00:23:13.747 clat percentiles (usec): 00:23:13.747 | 1.00th=[29230], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:23:13.747 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.747 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:23:13.747 | 99.00th=[40633], 99.50th=[43779], 99.90th=[49021], 99.95th=[50070], 00:23:13.747 | 99.99th=[50594] 00:23:13.747 bw ( KiB/s): min= 1788, max= 2048, per=4.15%, avg=1886.11, stdev=72.23, samples=19 00:23:13.747 iops : min= 447, max= 512, avg=471.53, stdev=18.06, samples=19 00:23:13.747 lat (msec) : 20=0.04%, 50=99.87%, 100=0.08% 00:23:13.747 cpu : usr=95.35%, sys=2.66%, ctx=115, majf=0, 
minf=42 00:23:13.747 IO depths : 1=4.8%, 2=11.0%, 4=24.8%, 8=51.6%, 16=7.7%, 32=0.0%, >=64=0.0% 00:23:13.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.747 filename2: (groupid=0, jobs=1): err= 0: pid=1793153: Wed Apr 24 19:53:53 2024 00:23:13.747 read: IOPS=471, BW=1888KiB/s (1933kB/s)(18.4MiB/10002msec) 00:23:13.747 slat (usec): min=7, max=137, avg=33.21, stdev=20.56 00:23:13.747 clat (usec): min=11219, max=65666, avg=33592.49, stdev=3233.84 00:23:13.747 lat (usec): min=11244, max=65700, avg=33625.70, stdev=3234.31 00:23:13.747 clat percentiles (usec): 00:23:13.747 | 1.00th=[24511], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:23:13.747 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.747 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:23:13.747 | 99.00th=[44827], 99.50th=[55313], 99.90th=[65799], 99.95th=[65799], 00:23:13.747 | 99.99th=[65799] 00:23:13.747 bw ( KiB/s): min= 1664, max= 2032, per=4.15%, avg=1886.11, stdev=80.95, samples=19 00:23:13.747 iops : min= 416, max= 508, avg=471.53, stdev=20.24, samples=19 00:23:13.747 lat (msec) : 20=0.42%, 50=98.60%, 100=0.97% 00:23:13.747 cpu : usr=98.38%, sys=1.19%, ctx=16, majf=0, minf=50 00:23:13.747 IO depths : 1=5.4%, 2=11.4%, 4=24.1%, 8=51.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:23:13.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.747 filename2: (groupid=0, jobs=1): err= 0: pid=1793154: Wed Apr 24 19:53:53 2024 00:23:13.747 read: IOPS=479, BW=1916KiB/s (1962kB/s)(18.7MiB/10009msec) 00:23:13.747 slat (usec): min=5, max=153, avg=30.53, stdev=19.80 00:23:13.747 clat (usec): min=10100, max=59054, avg=33151.08, stdev=3509.16 00:23:13.747 lat (usec): min=10180, max=59071, avg=33181.61, stdev=3509.90 00:23:13.747 clat percentiles (usec): 00:23:13.747 | 1.00th=[17695], 5.00th=[29230], 10.00th=[32375], 20.00th=[32900], 00:23:13.747 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.747 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:23:13.747 | 99.00th=[42730], 99.50th=[49021], 99.90th=[58983], 99.95th=[58983], 00:23:13.747 | 99.99th=[58983] 00:23:13.747 bw ( KiB/s): min= 1808, max= 2027, per=4.22%, avg=1917.58, stdev=43.76, samples=19 00:23:13.747 iops : min= 452, max= 506, avg=479.32, stdev=10.83, samples=19 00:23:13.747 lat (msec) : 20=1.96%, 50=97.60%, 100=0.44% 00:23:13.747 cpu : usr=97.63%, sys=1.74%, ctx=93, majf=0, minf=66 00:23:13.747 IO depths : 1=4.1%, 2=9.5%, 4=22.4%, 8=55.5%, 16=8.5%, 32=0.0%, >=64=0.0% 00:23:13.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 issued rwts: total=4795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.747 filename2: (groupid=0, jobs=1): err= 0: pid=1793155: Wed Apr 24 19:53:53 2024 00:23:13.747 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10015msec) 00:23:13.747 slat (usec): 
min=8, max=151, avg=41.92, stdev=17.57 00:23:13.747 clat (usec): min=16269, max=61156, avg=33487.08, stdev=2000.79 00:23:13.747 lat (usec): min=16322, max=61210, avg=33528.99, stdev=2000.02 00:23:13.747 clat percentiles (usec): 00:23:13.747 | 1.00th=[27919], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:23:13.747 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:23:13.747 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:23:13.747 | 99.00th=[42730], 99.50th=[45351], 99.90th=[52691], 99.95th=[53740], 00:23:13.747 | 99.99th=[61080] 00:23:13.747 bw ( KiB/s): min= 1788, max= 2048, per=4.15%, avg=1886.11, stdev=72.23, samples=19 00:23:13.747 iops : min= 447, max= 512, avg=471.53, stdev=18.06, samples=19 00:23:13.747 lat (msec) : 20=0.08%, 50=99.68%, 100=0.23% 00:23:13.747 cpu : usr=95.65%, sys=2.55%, ctx=219, majf=0, minf=42 00:23:13.747 IO depths : 1=5.0%, 2=10.5%, 4=22.6%, 8=54.2%, 16=7.7%, 32=0.0%, >=64=0.0% 00:23:13.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.747 filename2: (groupid=0, jobs=1): err= 0: pid=1793156: Wed Apr 24 19:53:53 2024 00:23:13.747 read: IOPS=472, BW=1890KiB/s (1935kB/s)(18.5MiB/10015msec) 00:23:13.747 slat (usec): min=8, max=973, avg=38.98, stdev=21.45 00:23:13.747 clat (usec): min=16995, max=53219, avg=33538.46, stdev=2054.23 00:23:13.747 lat (usec): min=17005, max=53257, avg=33577.44, stdev=2053.69 00:23:13.747 clat percentiles (usec): 00:23:13.747 | 1.00th=[29230], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:23:13.747 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.747 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:23:13.747 | 99.00th=[43254], 99.50th=[45876], 99.90th=[53216], 99.95th=[53216], 00:23:13.747 | 99.99th=[53216] 00:23:13.747 bw ( KiB/s): min= 1788, max= 1920, per=4.14%, avg=1884.42, stdev=57.70, samples=19 00:23:13.747 iops : min= 447, max= 480, avg=471.11, stdev=14.43, samples=19 00:23:13.747 lat (msec) : 20=0.25%, 50=99.54%, 100=0.21% 00:23:13.747 cpu : usr=97.67%, sys=1.55%, ctx=80, majf=0, minf=47 00:23:13.747 IO depths : 1=4.2%, 2=10.2%, 4=24.3%, 8=52.9%, 16=8.3%, 32=0.0%, >=64=0.0% 00:23:13.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 issued rwts: total=4732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.747 filename2: (groupid=0, jobs=1): err= 0: pid=1793157: Wed Apr 24 19:53:53 2024 00:23:13.747 read: IOPS=470, BW=1884KiB/s (1929kB/s)(18.4MiB/10006msec) 00:23:13.747 slat (usec): min=4, max=109, avg=37.53, stdev=15.43 00:23:13.747 clat (usec): min=12781, max=77344, avg=33629.51, stdev=3278.79 00:23:13.747 lat (usec): min=12800, max=77391, avg=33667.05, stdev=3276.93 00:23:13.747 clat percentiles (usec): 00:23:13.747 | 1.00th=[28443], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:23:13.747 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:23:13.747 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:23:13.747 | 99.00th=[43779], 99.50th=[53740], 99.90th=[77071], 99.95th=[77071], 00:23:13.747 | 99.99th=[77071] 
00:23:13.747 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1882.74, stdev=74.00, samples=19 00:23:13.747 iops : min= 416, max= 480, avg=470.68, stdev=18.50, samples=19 00:23:13.747 lat (msec) : 20=0.02%, 50=99.36%, 100=0.62% 00:23:13.747 cpu : usr=95.97%, sys=2.27%, ctx=180, majf=0, minf=50 00:23:13.747 IO depths : 1=5.6%, 2=11.4%, 4=23.5%, 8=52.3%, 16=7.1%, 32=0.0%, >=64=0.0% 00:23:13.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 issued rwts: total=4712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.747 filename2: (groupid=0, jobs=1): err= 0: pid=1793158: Wed Apr 24 19:53:53 2024 00:23:13.747 read: IOPS=480, BW=1920KiB/s (1966kB/s)(18.8MiB/10012msec) 00:23:13.747 slat (usec): min=7, max=254, avg=28.43, stdev=24.79 00:23:13.747 clat (usec): min=6032, max=60678, avg=33105.03, stdev=4266.34 00:23:13.747 lat (usec): min=6053, max=60694, avg=33133.46, stdev=4267.66 00:23:13.747 clat percentiles (usec): 00:23:13.747 | 1.00th=[11469], 5.00th=[31327], 10.00th=[32375], 20.00th=[32900], 00:23:13.747 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.747 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[35390], 00:23:13.747 | 99.00th=[43779], 99.50th=[55313], 99.90th=[60556], 99.95th=[60556], 00:23:13.747 | 99.99th=[60556] 00:23:13.747 bw ( KiB/s): min= 1792, max= 2043, per=4.21%, avg=1915.35, stdev=57.87, samples=20 00:23:13.747 iops : min= 448, max= 510, avg=478.80, stdev=14.38, samples=20 00:23:13.747 lat (msec) : 10=0.50%, 20=2.12%, 50=96.67%, 100=0.71% 00:23:13.747 cpu : usr=98.24%, sys=1.37%, ctx=15, majf=0, minf=55 00:23:13.747 IO depths : 1=4.1%, 2=9.5%, 4=22.6%, 8=55.2%, 16=8.7%, 32=0.0%, >=64=0.0% 00:23:13.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 issued rwts: total=4806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.747 filename2: (groupid=0, jobs=1): err= 0: pid=1793159: Wed Apr 24 19:53:53 2024 00:23:13.747 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10008msec) 00:23:13.747 slat (usec): min=7, max=131, avg=42.42, stdev=19.96 00:23:13.747 clat (usec): min=7953, max=60316, avg=33482.08, stdev=3537.54 00:23:13.747 lat (usec): min=7962, max=60366, avg=33524.50, stdev=3539.15 00:23:13.747 clat percentiles (usec): 00:23:13.747 | 1.00th=[19268], 5.00th=[31851], 10.00th=[32637], 20.00th=[32900], 00:23:13.747 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:23:13.747 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[36439], 00:23:13.747 | 99.00th=[45876], 99.50th=[54789], 99.90th=[60031], 99.95th=[60031], 00:23:13.747 | 99.99th=[60556] 00:23:13.747 bw ( KiB/s): min= 1667, max= 1936, per=4.15%, avg=1886.47, stdev=70.21, samples=19 00:23:13.747 iops : min= 416, max= 484, avg=471.58, stdev=17.68, samples=19 00:23:13.747 lat (msec) : 10=0.08%, 20=0.97%, 50=98.06%, 100=0.89% 00:23:13.747 cpu : usr=97.93%, sys=1.59%, ctx=41, majf=0, minf=43 00:23:13.747 IO depths : 1=1.5%, 2=6.9%, 4=22.3%, 8=57.9%, 16=11.4%, 32=0.0%, >=64=0.0% 00:23:13.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 complete : 0=0.0%, 4=93.8%, 8=0.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.747 issued rwts: 
total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.747 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.747 00:23:13.747 Run status group 0 (all jobs): 00:23:13.747 READ: bw=44.4MiB/s (46.6MB/s), 1880KiB/s-1958KiB/s (1925kB/s-2005kB/s), io=445MiB (466MB), run=10002-10018msec 00:23:13.748 19:53:53 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:13.748 19:53:53 -- target/dif.sh@43 -- # local sub 00:23:13.748 19:53:53 -- target/dif.sh@45 -- # for sub in "$@" 00:23:13.748 19:53:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:13.748 19:53:53 -- target/dif.sh@36 -- # local sub_id=0 00:23:13.748 19:53:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:13.748 19:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.748 19:53:53 -- common/autotest_common.sh@10 -- # set +x 00:23:13.748 19:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.748 19:53:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:13.748 19:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.748 19:53:53 -- common/autotest_common.sh@10 -- # set +x 00:23:13.748 19:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.748 19:53:53 -- target/dif.sh@45 -- # for sub in "$@" 00:23:13.748 19:53:53 -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:13.748 19:53:53 -- target/dif.sh@36 -- # local sub_id=1 00:23:13.748 19:53:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:13.748 19:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.748 19:53:53 -- common/autotest_common.sh@10 -- # set +x 00:23:13.748 19:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.748 19:53:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:13.748 19:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.748 19:53:53 -- common/autotest_common.sh@10 -- # set +x 00:23:13.748 19:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.748 19:53:53 -- target/dif.sh@45 -- # for sub in "$@" 00:23:13.748 19:53:53 -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:13.748 19:53:53 -- target/dif.sh@36 -- # local sub_id=2 00:23:13.748 19:53:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:13.748 19:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.748 19:53:53 -- common/autotest_common.sh@10 -- # set +x 00:23:13.748 19:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.748 19:53:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:13.748 19:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.748 19:53:53 -- common/autotest_common.sh@10 -- # set +x 00:23:13.748 19:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.748 19:53:53 -- target/dif.sh@115 -- # NULL_DIF=1 00:23:13.748 19:53:53 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:13.748 19:53:53 -- target/dif.sh@115 -- # numjobs=2 00:23:13.748 19:53:53 -- target/dif.sh@115 -- # iodepth=8 00:23:13.748 19:53:53 -- target/dif.sh@115 -- # runtime=5 00:23:13.748 19:53:53 -- target/dif.sh@115 -- # files=1 00:23:13.748 19:53:53 -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:13.748 19:53:53 -- target/dif.sh@28 -- # local sub 00:23:13.748 19:53:53 -- target/dif.sh@30 -- # for sub in "$@" 00:23:13.748 19:53:53 -- target/dif.sh@31 -- # create_subsystem 0 00:23:13.748 19:53:53 -- target/dif.sh@18 -- # local sub_id=0 00:23:13.748 19:53:53 -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:13.748 19:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.748 19:53:53 -- common/autotest_common.sh@10 -- # set +x 00:23:13.748 bdev_null0 00:23:13.748 19:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.748 19:53:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:13.748 19:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.748 19:53:53 -- common/autotest_common.sh@10 -- # set +x 00:23:13.748 19:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.748 19:53:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:13.748 19:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.748 19:53:53 -- common/autotest_common.sh@10 -- # set +x 00:23:13.748 19:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.748 19:53:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:13.748 19:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.748 19:53:53 -- common/autotest_common.sh@10 -- # set +x 00:23:13.748 [2024-04-24 19:53:53.871890] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.748 19:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.748 19:53:53 -- target/dif.sh@30 -- # for sub in "$@" 00:23:13.748 19:53:53 -- target/dif.sh@31 -- # create_subsystem 1 00:23:13.748 19:53:53 -- target/dif.sh@18 -- # local sub_id=1 00:23:13.748 19:53:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:13.748 19:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.748 19:53:53 -- common/autotest_common.sh@10 -- # set +x 00:23:13.748 bdev_null1 00:23:13.748 19:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.748 19:53:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:13.748 19:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.748 19:53:53 -- common/autotest_common.sh@10 -- # set +x 00:23:13.748 19:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.748 19:53:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:13.748 19:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.748 19:53:53 -- common/autotest_common.sh@10 -- # set +x 00:23:13.748 19:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.748 19:53:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:13.748 19:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.748 19:53:53 -- common/autotest_common.sh@10 -- # set +x 00:23:13.748 19:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.748 19:53:53 -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:13.748 19:53:53 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:13.748 19:53:53 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:13.748 19:53:53 -- nvmf/common.sh@521 -- # config=() 00:23:13.748 19:53:53 -- nvmf/common.sh@521 -- # local subsystem config 00:23:13.748 19:53:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:13.748 19:53:53 -- target/dif.sh@82 -- 
# fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.748 19:53:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:13.748 { 00:23:13.748 "params": { 00:23:13.748 "name": "Nvme$subsystem", 00:23:13.748 "trtype": "$TEST_TRANSPORT", 00:23:13.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.748 "adrfam": "ipv4", 00:23:13.748 "trsvcid": "$NVMF_PORT", 00:23:13.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.748 "hdgst": ${hdgst:-false}, 00:23:13.748 "ddgst": ${ddgst:-false} 00:23:13.748 }, 00:23:13.748 "method": "bdev_nvme_attach_controller" 00:23:13.748 } 00:23:13.748 EOF 00:23:13.748 )") 00:23:13.748 19:53:53 -- target/dif.sh@82 -- # gen_fio_conf 00:23:13.748 19:53:53 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.748 19:53:53 -- target/dif.sh@54 -- # local file 00:23:13.748 19:53:53 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:13.748 19:53:53 -- target/dif.sh@56 -- # cat 00:23:13.748 19:53:53 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:13.748 19:53:53 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:13.748 19:53:53 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:13.748 19:53:53 -- common/autotest_common.sh@1327 -- # shift 00:23:13.748 19:53:53 -- nvmf/common.sh@543 -- # cat 00:23:13.748 19:53:53 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:13.748 19:53:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:13.748 19:53:53 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:13.748 19:53:53 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:13.748 19:53:53 -- target/dif.sh@72 -- # (( file <= files )) 00:23:13.748 19:53:53 -- target/dif.sh@73 -- # cat 00:23:13.748 19:53:53 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:13.748 19:53:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:13.748 19:53:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:13.748 19:53:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:13.748 { 00:23:13.748 "params": { 00:23:13.748 "name": "Nvme$subsystem", 00:23:13.748 "trtype": "$TEST_TRANSPORT", 00:23:13.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.748 "adrfam": "ipv4", 00:23:13.748 "trsvcid": "$NVMF_PORT", 00:23:13.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.748 "hdgst": ${hdgst:-false}, 00:23:13.748 "ddgst": ${ddgst:-false} 00:23:13.748 }, 00:23:13.748 "method": "bdev_nvme_attach_controller" 00:23:13.748 } 00:23:13.748 EOF 00:23:13.748 )") 00:23:13.748 19:53:53 -- nvmf/common.sh@543 -- # cat 00:23:13.748 19:53:53 -- target/dif.sh@72 -- # (( file++ )) 00:23:13.748 19:53:53 -- target/dif.sh@72 -- # (( file <= files )) 00:23:13.748 19:53:53 -- nvmf/common.sh@545 -- # jq . 
00:23:13.748 19:53:53 -- nvmf/common.sh@546 -- # IFS=, 00:23:13.748 19:53:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:13.748 "params": { 00:23:13.748 "name": "Nvme0", 00:23:13.748 "trtype": "tcp", 00:23:13.748 "traddr": "10.0.0.2", 00:23:13.748 "adrfam": "ipv4", 00:23:13.748 "trsvcid": "4420", 00:23:13.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.748 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:13.748 "hdgst": false, 00:23:13.748 "ddgst": false 00:23:13.748 }, 00:23:13.748 "method": "bdev_nvme_attach_controller" 00:23:13.748 },{ 00:23:13.748 "params": { 00:23:13.748 "name": "Nvme1", 00:23:13.748 "trtype": "tcp", 00:23:13.748 "traddr": "10.0.0.2", 00:23:13.748 "adrfam": "ipv4", 00:23:13.748 "trsvcid": "4420", 00:23:13.748 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.748 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.748 "hdgst": false, 00:23:13.748 "ddgst": false 00:23:13.748 }, 00:23:13.748 "method": "bdev_nvme_attach_controller" 00:23:13.748 }' 00:23:13.748 19:53:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:13.748 19:53:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:13.748 19:53:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:13.748 19:53:53 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:13.748 19:53:53 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:13.748 19:53:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:13.748 19:53:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:13.748 19:53:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:13.748 19:53:53 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:13.748 19:53:53 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.748 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:13.748 ... 00:23:13.748 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:13.748 ... 
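For context, the LD_PRELOAD line above is the entire integration: fio itself is stock, the spdk_bdev ioengine is injected as an external plugin, the bdev layout arrives over --spdk_json_conf, and jobs address namespaces by bdev name (Nvme0n1, Nvme1n1). A self-contained equivalent of what the wrapper assembles through /dev/fd, with an illustrative job file (thread=1 is mandatory for the SPDK fio plugins); gen_target_json_sketch refers to the sketch shown earlier:

cat > /tmp/dif_job.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k
iodepth=8

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
# preload the plugin into unmodified fio and hand it the bdev JSON on an fd
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --spdk_json_conf <(gen_target_json_sketch 0 1) /tmp/dif_job.fio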
00:23:13.748 fio-3.35 00:23:13.748 Starting 4 threads 00:23:13.748 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.013 00:23:19.013 filename0: (groupid=0, jobs=1): err= 0: pid=1794429: Wed Apr 24 19:53:59 2024 00:23:19.013 read: IOPS=1909, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5004msec) 00:23:19.013 slat (nsec): min=6182, max=53754, avg=11470.68, stdev=5001.74 00:23:19.013 clat (usec): min=1497, max=7735, avg=4155.08, stdev=666.40 00:23:19.013 lat (usec): min=1518, max=7743, avg=4166.55, stdev=666.01 00:23:19.013 clat percentiles (usec): 00:23:19.013 | 1.00th=[ 2966], 5.00th=[ 3392], 10.00th=[ 3523], 20.00th=[ 3687], 00:23:19.013 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 4047], 60.00th=[ 4146], 00:23:19.013 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 5276], 95.00th=[ 5669], 00:23:19.013 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[ 6915], 99.95th=[ 7177], 00:23:19.013 | 99.99th=[ 7767] 00:23:19.013 bw ( KiB/s): min=14784, max=15840, per=25.54%, avg=15281.60, stdev=373.37, samples=10 00:23:19.013 iops : min= 1848, max= 1980, avg=1910.20, stdev=46.67, samples=10 00:23:19.013 lat (msec) : 2=0.01%, 4=45.56%, 10=54.43% 00:23:19.013 cpu : usr=94.66%, sys=4.84%, ctx=20, majf=0, minf=10 00:23:19.013 IO depths : 1=0.1%, 2=1.2%, 4=69.8%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:19.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.013 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.013 issued rwts: total=9554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:19.014 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:19.014 filename0: (groupid=0, jobs=1): err= 0: pid=1794430: Wed Apr 24 19:53:59 2024 00:23:19.014 read: IOPS=1855, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5001msec) 00:23:19.014 slat (nsec): min=5413, max=53845, avg=12631.22, stdev=6093.57 00:23:19.014 clat (usec): min=1521, max=7372, avg=4275.86, stdev=606.32 00:23:19.014 lat (usec): min=1528, max=7380, avg=4288.49, stdev=605.83 00:23:19.014 clat percentiles (usec): 00:23:19.014 | 1.00th=[ 2868], 5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 3916], 00:23:19.014 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4293], 00:23:19.014 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 5604], 00:23:19.014 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 6849], 99.95th=[ 6915], 00:23:19.014 | 99.99th=[ 7373] 00:23:19.014 bw ( KiB/s): min=13824, max=15600, per=24.75%, avg=14808.89, stdev=578.06, samples=9 00:23:19.014 iops : min= 1728, max= 1950, avg=1851.11, stdev=72.26, samples=9 00:23:19.014 lat (msec) : 2=0.03%, 4=29.60%, 10=70.37% 00:23:19.014 cpu : usr=94.40%, sys=5.10%, ctx=7, majf=0, minf=9 00:23:19.014 IO depths : 1=0.1%, 2=1.2%, 4=67.5%, 8=31.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:19.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.014 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.014 issued rwts: total=9281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:19.014 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:19.014 filename1: (groupid=0, jobs=1): err= 0: pid=1794431: Wed Apr 24 19:53:59 2024 00:23:19.014 read: IOPS=1852, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5004msec) 00:23:19.014 slat (nsec): min=6358, max=54771, avg=15506.53, stdev=7546.94 00:23:19.014 clat (usec): min=2561, max=8413, avg=4271.14, stdev=770.19 00:23:19.014 lat (usec): min=2584, max=8423, avg=4286.65, stdev=768.72 00:23:19.014 clat percentiles (usec): 00:23:19.014 | 1.00th=[ 3130], 5.00th=[ 3458], 
10.00th=[ 3556], 20.00th=[ 3720], 00:23:19.014 | 30.00th=[ 3851], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4178], 00:23:19.014 | 70.00th=[ 4228], 80.00th=[ 4621], 90.00th=[ 5604], 95.00th=[ 5997], 00:23:19.014 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7570], 99.95th=[ 7898], 00:23:19.014 | 99.99th=[ 8455] 00:23:19.014 bw ( KiB/s): min=14060, max=15888, per=24.77%, avg=14818.80, stdev=605.78, samples=10 00:23:19.014 iops : min= 1757, max= 1986, avg=1852.30, stdev=75.79, samples=10 00:23:19.014 lat (msec) : 4=42.05%, 10=57.95% 00:23:19.014 cpu : usr=95.70%, sys=3.78%, ctx=6, majf=0, minf=2 00:23:19.014 IO depths : 1=0.1%, 2=0.4%, 4=72.5%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:19.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.014 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.014 issued rwts: total=9268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:19.014 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:19.014 filename1: (groupid=0, jobs=1): err= 0: pid=1794432: Wed Apr 24 19:53:59 2024 00:23:19.014 read: IOPS=1862, BW=14.6MiB/s (15.3MB/s)(72.8MiB/5001msec) 00:23:19.014 slat (nsec): min=5015, max=53864, avg=12950.41, stdev=5992.05 00:23:19.014 clat (usec): min=1787, max=45776, avg=4261.84, stdev=1325.79 00:23:19.014 lat (usec): min=1795, max=45791, avg=4274.79, stdev=1325.47 00:23:19.014 clat percentiles (usec): 00:23:19.014 | 1.00th=[ 2933], 5.00th=[ 3392], 10.00th=[ 3654], 20.00th=[ 3884], 00:23:19.014 | 30.00th=[ 3982], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4293], 00:23:19.014 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 5080], 00:23:19.014 | 99.00th=[ 6063], 99.50th=[ 6456], 99.90th=[ 7373], 99.95th=[45876], 00:23:19.014 | 99.99th=[45876] 00:23:19.014 bw ( KiB/s): min=13301, max=15664, per=24.78%, avg=14823.67, stdev=815.82, samples=9 00:23:19.014 iops : min= 1662, max= 1958, avg=1852.89, stdev=102.12, samples=9 00:23:19.014 lat (msec) : 2=0.05%, 4=30.41%, 10=69.45%, 50=0.09% 00:23:19.014 cpu : usr=94.66%, sys=4.82%, ctx=6, majf=0, minf=0 00:23:19.014 IO depths : 1=0.1%, 2=1.0%, 4=65.4%, 8=33.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:19.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.014 complete : 0=0.0%, 4=97.1%, 8=2.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:19.014 issued rwts: total=9315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:19.014 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:19.014 00:23:19.014 Run status group 0 (all jobs): 00:23:19.014 READ: bw=58.4MiB/s (61.3MB/s), 14.5MiB/s-14.9MiB/s (15.2MB/s-15.6MB/s), io=292MiB (307MB), run=5001-5004msec 00:23:19.014 19:54:00 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:19.014 19:54:00 -- target/dif.sh@43 -- # local sub 00:23:19.014 19:54:00 -- target/dif.sh@45 -- # for sub in "$@" 00:23:19.014 19:54:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:19.014 19:54:00 -- target/dif.sh@36 -- # local sub_id=0 00:23:19.014 19:54:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:19.014 19:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.014 19:54:00 -- common/autotest_common.sh@10 -- # set +x 00:23:19.014 19:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.014 19:54:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:19.014 19:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.014 19:54:00 -- common/autotest_common.sh@10 -- # set +x 00:23:19.014 
19:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.014 19:54:00 -- target/dif.sh@45 -- # for sub in "$@" 00:23:19.014 19:54:00 -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:19.014 19:54:00 -- target/dif.sh@36 -- # local sub_id=1 00:23:19.014 19:54:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:19.014 19:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.014 19:54:00 -- common/autotest_common.sh@10 -- # set +x 00:23:19.014 19:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.014 19:54:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:19.014 19:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.014 19:54:00 -- common/autotest_common.sh@10 -- # set +x 00:23:19.014 19:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.014 00:23:19.014 real 0m24.323s 00:23:19.014 user 4m29.584s 00:23:19.014 sys 0m7.543s 00:23:19.014 19:54:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:19.014 19:54:00 -- common/autotest_common.sh@10 -- # set +x 00:23:19.014 ************************************ 00:23:19.014 END TEST fio_dif_rand_params 00:23:19.014 ************************************ 00:23:19.014 19:54:00 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:19.014 19:54:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:19.014 19:54:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:19.014 19:54:00 -- common/autotest_common.sh@10 -- # set +x 00:23:19.014 ************************************ 00:23:19.014 START TEST fio_dif_digest 00:23:19.014 ************************************ 00:23:19.014 19:54:00 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:23:19.014 19:54:00 -- target/dif.sh@123 -- # local NULL_DIF 00:23:19.014 19:54:00 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:19.014 19:54:00 -- target/dif.sh@125 -- # local hdgst ddgst 00:23:19.014 19:54:00 -- target/dif.sh@127 -- # NULL_DIF=3 00:23:19.014 19:54:00 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:19.014 19:54:00 -- target/dif.sh@127 -- # numjobs=3 00:23:19.014 19:54:00 -- target/dif.sh@127 -- # iodepth=3 00:23:19.014 19:54:00 -- target/dif.sh@127 -- # runtime=10 00:23:19.014 19:54:00 -- target/dif.sh@128 -- # hdgst=true 00:23:19.014 19:54:00 -- target/dif.sh@128 -- # ddgst=true 00:23:19.014 19:54:00 -- target/dif.sh@130 -- # create_subsystems 0 00:23:19.014 19:54:00 -- target/dif.sh@28 -- # local sub 00:23:19.014 19:54:00 -- target/dif.sh@30 -- # for sub in "$@" 00:23:19.014 19:54:00 -- target/dif.sh@31 -- # create_subsystem 0 00:23:19.014 19:54:00 -- target/dif.sh@18 -- # local sub_id=0 00:23:19.014 19:54:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:19.014 19:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.014 19:54:00 -- common/autotest_common.sh@10 -- # set +x 00:23:19.014 bdev_null0 00:23:19.014 19:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.015 19:54:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:19.015 19:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.015 19:54:00 -- common/autotest_common.sh@10 -- # set +x 00:23:19.015 19:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.015 19:54:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 
00:23:19.015 19:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.015 19:54:00 -- common/autotest_common.sh@10 -- # set +x 00:23:19.015 19:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.015 19:54:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:19.015 19:54:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.015 19:54:00 -- common/autotest_common.sh@10 -- # set +x 00:23:19.015 [2024-04-24 19:54:00.377731] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.015 19:54:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.015 19:54:00 -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:19.015 19:54:00 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:19.015 19:54:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:19.015 19:54:00 -- nvmf/common.sh@521 -- # config=() 00:23:19.015 19:54:00 -- nvmf/common.sh@521 -- # local subsystem config 00:23:19.015 19:54:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:19.015 19:54:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:19.015 { 00:23:19.015 "params": { 00:23:19.015 "name": "Nvme$subsystem", 00:23:19.015 "trtype": "$TEST_TRANSPORT", 00:23:19.015 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.015 "adrfam": "ipv4", 00:23:19.015 "trsvcid": "$NVMF_PORT", 00:23:19.015 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.015 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.015 "hdgst": ${hdgst:-false}, 00:23:19.015 "ddgst": ${ddgst:-false} 00:23:19.015 }, 00:23:19.015 "method": "bdev_nvme_attach_controller" 00:23:19.015 } 00:23:19.015 EOF 00:23:19.015 )") 00:23:19.015 19:54:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:19.015 19:54:00 -- target/dif.sh@82 -- # gen_fio_conf 00:23:19.015 19:54:00 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:19.015 19:54:00 -- target/dif.sh@54 -- # local file 00:23:19.015 19:54:00 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:19.015 19:54:00 -- target/dif.sh@56 -- # cat 00:23:19.015 19:54:00 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:19.015 19:54:00 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:19.015 19:54:00 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:19.015 19:54:00 -- common/autotest_common.sh@1327 -- # shift 00:23:19.015 19:54:00 -- nvmf/common.sh@543 -- # cat 00:23:19.015 19:54:00 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:19.015 19:54:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:19.015 19:54:00 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:19.015 19:54:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:19.015 19:54:00 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:19.015 19:54:00 -- target/dif.sh@72 -- # (( file <= files )) 00:23:19.015 19:54:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:19.015 19:54:00 -- nvmf/common.sh@545 -- # jq . 
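The only material difference from the earlier fio pass is hidden in the template above: hdgst and ddgst default to false there, and fio_dif_digest sets both to true, so the generated config (printed just below) negotiates NVMe/TCP header and data digests, i.e. CRC32C protection on every PDU. Stripped of the fio wrapper, the same digest-enabled attach as a raw JSON-RPC request; the method and param names match the generated config verbatim, while delivering it with nc over the app's default /var/tmp/spdk.sock socket is only an illustration:

# attach a TCP controller with header+data digest enabled
nc -U /var/tmp/spdk.sock <<'EOF'
{ "jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": true, "ddgst": true } }
EOF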
00:23:19.015 19:54:00 -- nvmf/common.sh@546 -- # IFS=, 00:23:19.015 19:54:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:19.015 "params": { 00:23:19.015 "name": "Nvme0", 00:23:19.015 "trtype": "tcp", 00:23:19.015 "traddr": "10.0.0.2", 00:23:19.015 "adrfam": "ipv4", 00:23:19.015 "trsvcid": "4420", 00:23:19.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:19.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:19.015 "hdgst": true, 00:23:19.015 "ddgst": true 00:23:19.015 }, 00:23:19.015 "method": "bdev_nvme_attach_controller" 00:23:19.015 }' 00:23:19.015 19:54:00 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:19.015 19:54:00 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:19.015 19:54:00 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:19.015 19:54:00 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:19.015 19:54:00 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:19.015 19:54:00 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:19.015 19:54:00 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:19.015 19:54:00 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:19.015 19:54:00 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:19.015 19:54:00 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:19.273 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:19.274 ... 00:23:19.274 fio-3.35 00:23:19.274 Starting 3 threads 00:23:19.274 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.472 00:23:31.472 filename0: (groupid=0, jobs=1): err= 0: pid=1795370: Wed Apr 24 19:54:11 2024 00:23:31.472 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(243MiB/10045msec) 00:23:31.472 slat (nsec): min=6162, max=54575, avg=17389.68, stdev=4320.53 00:23:31.472 clat (usec): min=7986, max=58853, avg=15465.33, stdev=4657.34 00:23:31.472 lat (usec): min=7999, max=58870, avg=15482.72, stdev=4657.23 00:23:31.472 clat percentiles (usec): 00:23:31.472 | 1.00th=[10028], 5.00th=[13173], 10.00th=[13698], 20.00th=[14091], 00:23:31.472 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:23:31.472 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16581], 95.00th=[16909], 00:23:31.472 | 99.00th=[55837], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:23:31.472 | 99.99th=[58983] 00:23:31.472 bw ( KiB/s): min=22016, max=28160, per=30.98%, avg=24834.45, stdev=1535.61, samples=20 00:23:31.472 iops : min= 172, max= 220, avg=194.00, stdev=12.00, samples=20 00:23:31.472 lat (msec) : 10=0.98%, 20=97.68%, 50=0.26%, 100=1.08% 00:23:31.472 cpu : usr=86.62%, sys=10.63%, ctx=760, majf=0, minf=92 00:23:31.472 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:31.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.472 issued rwts: total=1943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.472 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:31.472 filename0: (groupid=0, jobs=1): err= 0: pid=1795371: Wed Apr 24 19:54:11 2024 00:23:31.472 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(273MiB/10043msec) 00:23:31.472 slat (nsec): min=5096, max=36386, avg=15044.08, stdev=3790.20 00:23:31.472 clat 
(usec): min=8501, max=55095, avg=13769.97, stdev=2268.59 00:23:31.472 lat (usec): min=8515, max=55110, avg=13785.01, stdev=2268.55 00:23:31.472 clat percentiles (usec): 00:23:31.472 | 1.00th=[ 9634], 5.00th=[11076], 10.00th=[12256], 20.00th=[12911], 00:23:31.472 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:23:31.472 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15139], 95.00th=[15533], 00:23:31.472 | 99.00th=[16450], 99.50th=[17171], 99.90th=[53740], 99.95th=[54789], 00:23:31.472 | 99.99th=[55313] 00:23:31.472 bw ( KiB/s): min=25088, max=29952, per=34.81%, avg=27904.00, stdev=1040.71, samples=20 00:23:31.472 iops : min= 196, max= 234, avg=218.00, stdev= 8.13, samples=20 00:23:31.472 lat (msec) : 10=2.15%, 20=97.62%, 50=0.05%, 100=0.18% 00:23:31.472 cpu : usr=87.41%, sys=9.60%, ctx=725, majf=0, minf=130 00:23:31.472 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:31.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.472 issued rwts: total=2182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.472 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:31.472 filename0: (groupid=0, jobs=1): err= 0: pid=1795372: Wed Apr 24 19:54:11 2024 00:23:31.472 read: IOPS=216, BW=27.0MiB/s (28.3MB/s)(271MiB/10022msec) 00:23:31.472 slat (nsec): min=4896, max=35504, avg=14377.34, stdev=2785.63 00:23:31.472 clat (usec): min=8973, max=56528, avg=13862.43, stdev=2891.22 00:23:31.472 lat (usec): min=8997, max=56553, avg=13876.81, stdev=2891.30 00:23:31.472 clat percentiles (usec): 00:23:31.472 | 1.00th=[ 9634], 5.00th=[11600], 10.00th=[12387], 20.00th=[12911], 00:23:31.472 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:23:31.472 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15533], 00:23:31.472 | 99.00th=[16450], 99.50th=[18220], 99.90th=[55313], 99.95th=[55837], 00:23:31.472 | 99.99th=[56361] 00:23:31.472 bw ( KiB/s): min=23552, max=29440, per=34.54%, avg=27686.40, stdev=1166.58, samples=20 00:23:31.472 iops : min= 184, max= 230, avg=216.30, stdev= 9.11, samples=20 00:23:31.472 lat (msec) : 10=1.71%, 20=97.83%, 50=0.09%, 100=0.37% 00:23:31.472 cpu : usr=88.89%, sys=8.80%, ctx=549, majf=0, minf=155 00:23:31.472 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:31.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:31.472 issued rwts: total=2166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:31.472 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:31.472 00:23:31.472 Run status group 0 (all jobs): 00:23:31.472 READ: bw=78.3MiB/s (82.1MB/s), 24.2MiB/s-27.2MiB/s (25.4MB/s-28.5MB/s), io=786MiB (825MB), run=10022-10045msec 00:23:31.472 19:54:11 -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:31.472 19:54:11 -- target/dif.sh@43 -- # local sub 00:23:31.472 19:54:11 -- target/dif.sh@45 -- # for sub in "$@" 00:23:31.472 19:54:11 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:31.472 19:54:11 -- target/dif.sh@36 -- # local sub_id=0 00:23:31.472 19:54:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:31.472 19:54:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.472 19:54:11 -- common/autotest_common.sh@10 -- # set +x 00:23:31.472 19:54:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:23:31.472 19:54:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:31.472 19:54:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.472 19:54:11 -- common/autotest_common.sh@10 -- # set +x 00:23:31.472 19:54:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.472 00:23:31.472 real 0m11.077s 00:23:31.472 user 0m27.589s 00:23:31.472 sys 0m3.188s 00:23:31.472 19:54:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:31.472 19:54:11 -- common/autotest_common.sh@10 -- # set +x 00:23:31.472 ************************************ 00:23:31.472 END TEST fio_dif_digest 00:23:31.472 ************************************ 00:23:31.472 19:54:11 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:31.472 19:54:11 -- target/dif.sh@147 -- # nvmftestfini 00:23:31.472 19:54:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:31.472 19:54:11 -- nvmf/common.sh@117 -- # sync 00:23:31.472 19:54:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:31.472 19:54:11 -- nvmf/common.sh@120 -- # set +e 00:23:31.472 19:54:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:31.472 19:54:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:31.472 rmmod nvme_tcp 00:23:31.472 rmmod nvme_fabrics 00:23:31.472 rmmod nvme_keyring 00:23:31.472 19:54:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:31.472 19:54:11 -- nvmf/common.sh@124 -- # set -e 00:23:31.472 19:54:11 -- nvmf/common.sh@125 -- # return 0 00:23:31.472 19:54:11 -- nvmf/common.sh@478 -- # '[' -n 1789096 ']' 00:23:31.472 19:54:11 -- nvmf/common.sh@479 -- # killprocess 1789096 00:23:31.472 19:54:11 -- common/autotest_common.sh@936 -- # '[' -z 1789096 ']' 00:23:31.472 19:54:11 -- common/autotest_common.sh@940 -- # kill -0 1789096 00:23:31.472 19:54:11 -- common/autotest_common.sh@941 -- # uname 00:23:31.472 19:54:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:31.472 19:54:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1789096 00:23:31.472 19:54:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:31.472 19:54:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:31.472 19:54:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1789096' 00:23:31.472 killing process with pid 1789096 00:23:31.472 19:54:11 -- common/autotest_common.sh@955 -- # kill 1789096 00:23:31.472 19:54:11 -- common/autotest_common.sh@960 -- # wait 1789096 00:23:31.472 19:54:11 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:23:31.472 19:54:11 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:31.472 Waiting for block devices as requested 00:23:31.472 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:31.730 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:31.730 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:31.730 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:31.988 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:31.988 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:31.988 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:31.988 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:32.246 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:32.246 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:32.246 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:32.246 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:32.502 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:32.502 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:32.502 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:23:32.502 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:32.759 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:32.759 19:54:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:32.759 19:54:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:32.759 19:54:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:32.759 19:54:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:32.759 19:54:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.759 19:54:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:32.759 19:54:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.287 19:54:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:35.287 00:23:35.287 real 1m7.043s 00:23:35.287 user 6m24.779s 00:23:35.287 sys 0m20.279s 00:23:35.287 19:54:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:35.287 19:54:16 -- common/autotest_common.sh@10 -- # set +x 00:23:35.287 ************************************ 00:23:35.287 END TEST nvmf_dif 00:23:35.287 ************************************ 00:23:35.288 19:54:16 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:35.288 19:54:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:35.288 19:54:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:35.288 19:54:16 -- common/autotest_common.sh@10 -- # set +x 00:23:35.288 ************************************ 00:23:35.288 START TEST nvmf_abort_qd_sizes 00:23:35.288 ************************************ 00:23:35.288 19:54:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:35.288 * Looking for test storage... 
00:23:35.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:35.288 19:54:16 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.288 19:54:16 -- nvmf/common.sh@7 -- # uname -s 00:23:35.288 19:54:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.288 19:54:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.288 19:54:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.288 19:54:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.288 19:54:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.288 19:54:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.288 19:54:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.288 19:54:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.288 19:54:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.288 19:54:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.288 19:54:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:35.288 19:54:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:35.288 19:54:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.288 19:54:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.288 19:54:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.288 19:54:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.288 19:54:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:35.288 19:54:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.288 19:54:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.288 19:54:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.288 19:54:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.288 19:54:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.288 19:54:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.288 19:54:16 -- paths/export.sh@5 -- # export PATH 00:23:35.288 19:54:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.288 19:54:16 -- nvmf/common.sh@47 -- # : 0 00:23:35.288 19:54:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:35.288 19:54:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:35.288 19:54:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.288 19:54:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.288 19:54:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.288 19:54:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:35.288 19:54:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:35.288 19:54:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:35.288 19:54:16 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:35.288 19:54:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:35.288 19:54:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.288 19:54:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:35.288 19:54:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:35.288 19:54:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:35.288 19:54:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.288 19:54:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:35.288 19:54:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.288 19:54:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:35.288 19:54:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:35.288 19:54:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:35.288 19:54:16 -- common/autotest_common.sh@10 -- # set +x 00:23:37.271 19:54:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:37.271 19:54:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:37.271 19:54:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:37.271 19:54:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:37.271 19:54:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:37.271 19:54:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:37.271 19:54:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:37.271 19:54:18 -- nvmf/common.sh@295 -- # net_devs=() 00:23:37.271 19:54:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:37.271 19:54:18 -- nvmf/common.sh@296 -- # e810=() 00:23:37.271 19:54:18 -- nvmf/common.sh@296 -- # local -ga e810 00:23:37.271 19:54:18 -- nvmf/common.sh@297 -- # x722=() 00:23:37.271 19:54:18 -- nvmf/common.sh@297 -- # local -ga x722 00:23:37.271 19:54:18 -- nvmf/common.sh@298 -- # mlx=() 00:23:37.271 19:54:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:37.271 19:54:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.271 19:54:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.271 19:54:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.271 19:54:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.271 19:54:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.271 19:54:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.271 19:54:18 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.271 19:54:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.271 19:54:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.271 19:54:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.271 19:54:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.271 19:54:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:37.271 19:54:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:37.271 19:54:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:37.271 19:54:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:37.271 19:54:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:37.271 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:37.271 19:54:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:37.271 19:54:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:37.271 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:37.271 19:54:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:37.271 19:54:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:37.271 19:54:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.271 19:54:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:37.271 19:54:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.271 19:54:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:37.271 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:37.271 19:54:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.271 19:54:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:37.271 19:54:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.271 19:54:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:37.271 19:54:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.271 19:54:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:37.271 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:37.271 19:54:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.271 19:54:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:37.271 19:54:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:37.271 19:54:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:37.271 19:54:18 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:37.271 19:54:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:37.271 19:54:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.271 19:54:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.271 19:54:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.271 19:54:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:37.271 19:54:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.271 19:54:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.271 19:54:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:37.271 19:54:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.271 19:54:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.271 19:54:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:37.271 19:54:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:37.271 19:54:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.271 19:54:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.271 19:54:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.271 19:54:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.271 19:54:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:37.271 19:54:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.271 19:54:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.271 19:54:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.271 19:54:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:37.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:23:37.271 00:23:37.271 --- 10.0.0.2 ping statistics --- 00:23:37.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.271 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:23:37.271 19:54:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:37.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:23:37.271 00:23:37.271 --- 10.0.0.1 ping statistics --- 00:23:37.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.271 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:23:37.271 19:54:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.271 19:54:18 -- nvmf/common.sh@411 -- # return 0 00:23:37.271 19:54:18 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:23:37.271 19:54:18 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:38.206 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:38.206 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:38.206 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:38.206 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:38.206 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:38.206 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:38.206 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:38.206 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:38.206 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:38.206 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:38.206 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:38.206 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:38.206 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:38.206 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:38.463 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:38.463 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:39.399 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:39.399 19:54:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.399 19:54:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:39.399 19:54:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:39.399 19:54:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.399 19:54:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:39.399 19:54:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:39.399 19:54:20 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:39.399 19:54:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:39.399 19:54:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:39.399 19:54:20 -- common/autotest_common.sh@10 -- # set +x 00:23:39.399 19:54:20 -- nvmf/common.sh@470 -- # nvmfpid=1800727 00:23:39.399 19:54:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:39.399 19:54:20 -- nvmf/common.sh@471 -- # waitforlisten 1800727 00:23:39.399 19:54:20 -- common/autotest_common.sh@817 -- # '[' -z 1800727 ']' 00:23:39.399 19:54:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.399 19:54:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:39.399 19:54:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.399 19:54:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:39.399 19:54:20 -- common/autotest_common.sh@10 -- # set +x 00:23:39.399 [2024-04-24 19:54:20.859134] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:23:39.399 [2024-04-24 19:54:20.859212] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.399 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.658 [2024-04-24 19:54:20.930274] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.658 [2024-04-24 19:54:21.053841] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.658 [2024-04-24 19:54:21.053903] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.658 [2024-04-24 19:54:21.053917] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.658 [2024-04-24 19:54:21.053929] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.658 [2024-04-24 19:54:21.053954] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.658 [2024-04-24 19:54:21.057653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.658 [2024-04-24 19:54:21.057702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.658 [2024-04-24 19:54:21.057794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.658 [2024-04-24 19:54:21.057797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.916 19:54:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:39.916 19:54:21 -- common/autotest_common.sh@850 -- # return 0 00:23:39.916 19:54:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:39.916 19:54:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:39.916 19:54:21 -- common/autotest_common.sh@10 -- # set +x 00:23:39.916 19:54:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.916 19:54:21 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:39.916 19:54:21 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:39.916 19:54:21 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:39.916 19:54:21 -- scripts/common.sh@309 -- # local bdf bdfs 00:23:39.916 19:54:21 -- scripts/common.sh@310 -- # local nvmes 00:23:39.916 19:54:21 -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:23:39.916 19:54:21 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:23:39.916 19:54:21 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:39.916 19:54:21 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:23:39.916 19:54:21 -- scripts/common.sh@320 -- # uname -s 00:23:39.916 19:54:21 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:39.916 19:54:21 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:39.916 19:54:21 -- scripts/common.sh@325 -- # (( 1 )) 00:23:39.916 19:54:21 -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:23:39.916 19:54:21 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:23:39.916 19:54:21 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:23:39.916 19:54:21 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:39.916 19:54:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:39.916 19:54:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:39.916 19:54:21 -- 
common/autotest_common.sh@10 -- # set +x 00:23:39.916 ************************************ 00:23:39.916 START TEST spdk_target_abort 00:23:39.916 ************************************ 00:23:39.916 19:54:21 -- common/autotest_common.sh@1111 -- # spdk_target 00:23:39.916 19:54:21 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:39.916 19:54:21 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:23:39.916 19:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.916 19:54:21 -- common/autotest_common.sh@10 -- # set +x 00:23:43.197 spdk_targetn1 00:23:43.197 19:54:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.197 19:54:24 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:43.197 19:54:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.197 19:54:24 -- common/autotest_common.sh@10 -- # set +x 00:23:43.197 [2024-04-24 19:54:24.131995] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.197 19:54:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.197 19:54:24 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:43.197 19:54:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.197 19:54:24 -- common/autotest_common.sh@10 -- # set +x 00:23:43.197 19:54:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.197 19:54:24 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:43.197 19:54:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.197 19:54:24 -- common/autotest_common.sh@10 -- # set +x 00:23:43.197 19:54:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.197 19:54:24 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:23:43.198 19:54:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.198 19:54:24 -- common/autotest_common.sh@10 -- # set +x 00:23:43.198 [2024-04-24 19:54:24.164242] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.198 19:54:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:43.198 19:54:24 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:43.198 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.480 Initializing NVMe Controllers 00:23:46.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:46.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:46.480 Initialization complete. Launching workers. 00:23:46.480 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8938, failed: 0 00:23:46.480 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1244, failed to submit 7694 00:23:46.480 success 828, unsuccess 416, failed 0 00:23:46.480 19:54:27 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:46.480 19:54:27 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:46.480 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.763 Initializing NVMe Controllers 00:23:49.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:49.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:49.763 Initialization complete. Launching workers. 00:23:49.763 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8664, failed: 0 00:23:49.763 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1286, failed to submit 7378 00:23:49.763 success 336, unsuccess 950, failed 0 00:23:49.763 19:54:30 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:49.763 19:54:30 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:49.763 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.046 Initializing NVMe Controllers 00:23:53.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:53.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:53.046 Initialization complete. Launching workers. 
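A decoding aid for the two completed sweeps above and the qd=64 sweep that finishes below: the abort example drives 4KiB mixed I/O (-w rw -M 50 -o 4096) while racing abort commands against it with at most -q aborts outstanding; 'abort submitted' counts aborts that were queued, 'failed to submit' the I/O for which the abort queue was already full, and 'success'/'unsuccess' whether the target actually aborted the command or completed the abort without killing it (this reading of the counters is an interpretation of the output, not stated in the log). The rabort helper's loop, reconstructed from the trace above, reduces to roughly:

target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do   # qds=(4 24 64) per the trace
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done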
00:23:53.046 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31292, failed: 0 00:23:53.046 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2804, failed to submit 28488 00:23:53.046 success 511, unsuccess 2293, failed 0 00:23:53.046 19:54:33 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:53.046 19:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.046 19:54:33 -- common/autotest_common.sh@10 -- # set +x 00:23:53.046 19:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.046 19:54:33 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:53.046 19:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.046 19:54:33 -- common/autotest_common.sh@10 -- # set +x 00:23:53.978 19:54:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.978 19:54:35 -- target/abort_qd_sizes.sh@61 -- # killprocess 1800727 00:23:53.978 19:54:35 -- common/autotest_common.sh@936 -- # '[' -z 1800727 ']' 00:23:53.978 19:54:35 -- common/autotest_common.sh@940 -- # kill -0 1800727 00:23:53.978 19:54:35 -- common/autotest_common.sh@941 -- # uname 00:23:53.978 19:54:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:53.978 19:54:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1800727 00:23:53.978 19:54:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:53.978 19:54:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:53.978 19:54:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1800727' 00:23:53.978 killing process with pid 1800727 00:23:53.978 19:54:35 -- common/autotest_common.sh@955 -- # kill 1800727 00:23:53.978 19:54:35 -- common/autotest_common.sh@960 -- # wait 1800727 00:23:54.236 00:23:54.236 real 0m14.309s 00:23:54.236 user 0m54.478s 00:23:54.236 sys 0m2.548s 00:23:54.236 19:54:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:54.236 19:54:35 -- common/autotest_common.sh@10 -- # set +x 00:23:54.236 ************************************ 00:23:54.236 END TEST spdk_target_abort 00:23:54.236 ************************************ 00:23:54.236 19:54:35 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:54.236 19:54:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:54.236 19:54:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:54.236 19:54:35 -- common/autotest_common.sh@10 -- # set +x 00:23:54.236 ************************************ 00:23:54.236 START TEST kernel_target_abort 00:23:54.236 ************************************ 00:23:54.236 19:54:35 -- common/autotest_common.sh@1111 -- # kernel_target 00:23:54.236 19:54:35 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:54.236 19:54:35 -- nvmf/common.sh@717 -- # local ip 00:23:54.236 19:54:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:54.236 19:54:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:54.236 19:54:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.236 19:54:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.236 19:54:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:54.236 19:54:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.237 19:54:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:54.237 19:54:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:54.237 19:54:35 -- nvmf/common.sh@731 -- # 
echo 10.0.0.1 00:23:54.237 19:54:35 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:54.237 19:54:35 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:54.237 19:54:35 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:54.237 19:54:35 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:54.237 19:54:35 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:54.237 19:54:35 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:54.237 19:54:35 -- nvmf/common.sh@628 -- # local block nvme 00:23:54.237 19:54:35 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:23:54.237 19:54:35 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:54.495 19:54:35 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:54.495 19:54:35 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:55.453 Waiting for block devices as requested 00:23:55.453 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:55.453 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:55.714 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:55.714 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:55.714 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:55.973 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:55.973 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:55.973 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:55.973 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:55.973 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:56.230 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:56.230 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:56.230 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:56.230 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:56.488 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:56.488 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:56.488 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:56.746 19:54:38 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:56.747 19:54:38 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:56.747 19:54:38 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:56.747 19:54:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:56.747 19:54:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:56.747 19:54:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:56.747 19:54:38 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:56.747 19:54:38 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:56.747 19:54:38 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:56.747 No valid GPT data, bailing 00:23:56.747 19:54:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:56.747 19:54:38 -- scripts/common.sh@391 -- # pt= 00:23:56.747 19:54:38 -- scripts/common.sh@392 -- # return 1 00:23:56.747 19:54:38 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:56.747 19:54:38 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:23:56.747 19:54:38 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:56.747 19:54:38 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:23:56.747 19:54:38 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:23:56.747 19:54:38 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:23:56.747 19:54:38 -- nvmf/common.sh@656 -- # echo 1
00:23:56.747 19:54:38 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1
00:23:56.747 19:54:38 -- nvmf/common.sh@658 -- # echo 1
00:23:56.747 19:54:38 -- nvmf/common.sh@660 -- # echo 10.0.0.1
00:23:56.747 19:54:38 -- nvmf/common.sh@661 -- # echo tcp
00:23:56.747 19:54:38 -- nvmf/common.sh@662 -- # echo 4420
00:23:56.747 19:54:38 -- nvmf/common.sh@663 -- # echo ipv4
00:23:56.747 19:54:38 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:23:56.747 19:54:38 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420
00:23:56.747
00:23:56.747 Discovery Log Number of Records 2, Generation counter 2
00:23:56.747 =====Discovery Log Entry 0======
00:23:56.747 trtype: tcp
00:23:56.747 adrfam: ipv4
00:23:56.747 subtype: current discovery subsystem
00:23:56.747 treq: not specified, sq flow control disable supported
00:23:56.747 portid: 1
00:23:56.747 trsvcid: 4420
00:23:56.747 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:23:56.747 traddr: 10.0.0.1
00:23:56.747 eflags: none
00:23:56.747 sectype: none
00:23:56.747 =====Discovery Log Entry 1======
00:23:56.747 trtype: tcp
00:23:56.747 adrfam: ipv4
00:23:56.747 subtype: nvme subsystem
00:23:56.747 treq: not specified, sq flow control disable supported
00:23:56.747 portid: 1
00:23:56.747 trsvcid: 4420
00:23:56.747 subnqn: nqn.2016-06.io.spdk:testnqn
00:23:56.747 traddr: 10.0.0.1
00:23:56.747 eflags: none
00:23:56.747 sectype: none
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@24 -- # local target r
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr
trsvcid subnqn 00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:56.747 19:54:38 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:56.747 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.030 Initializing NVMe Controllers 00:24:00.030 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:00.030 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:00.030 Initialization complete. Launching workers. 00:24:00.030 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27369, failed: 0 00:24:00.030 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27369, failed to submit 0 00:24:00.030 success 0, unsuccess 27369, failed 0 00:24:00.030 19:54:41 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:00.030 19:54:41 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:00.030 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.315 Initializing NVMe Controllers 00:24:03.315 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:03.315 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:03.315 Initialization complete. Launching workers. 00:24:03.315 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 56391, failed: 0 00:24:03.315 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14198, failed to submit 42193 00:24:03.315 success 0, unsuccess 14198, failed 0 00:24:03.315 19:54:44 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:03.315 19:54:44 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:03.315 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.596 Initializing NVMe Controllers 00:24:06.596 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:06.596 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:06.596 Initialization complete. Launching workers. 
00:24:06.596 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55187, failed: 0 00:24:06.596 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13770, failed to submit 41417 00:24:06.596 success 0, unsuccess 13770, failed 0 00:24:06.596 19:54:47 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:06.596 19:54:47 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:06.596 19:54:47 -- nvmf/common.sh@675 -- # echo 0 00:24:06.596 19:54:47 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:06.596 19:54:47 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:06.596 19:54:47 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:06.596 19:54:47 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:06.596 19:54:47 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:06.596 19:54:47 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:06.596 19:54:47 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:07.530 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:07.530 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:07.530 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:07.530 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:07.530 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:07.530 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:07.530 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:07.530 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:07.530 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:07.530 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:07.530 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:07.530 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:07.530 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:07.530 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:07.530 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:07.530 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:08.463 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:08.463 00:24:08.463 real 0m14.253s 00:24:08.463 user 0m4.513s 00:24:08.463 sys 0m3.410s 00:24:08.721 19:54:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:08.722 19:54:49 -- common/autotest_common.sh@10 -- # set +x 00:24:08.722 ************************************ 00:24:08.722 END TEST kernel_target_abort 00:24:08.722 ************************************ 00:24:08.722 19:54:50 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:08.722 19:54:50 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:08.722 19:54:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:08.722 19:54:50 -- nvmf/common.sh@117 -- # sync 00:24:08.722 19:54:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:08.722 19:54:50 -- nvmf/common.sh@120 -- # set +e 00:24:08.722 19:54:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:08.722 19:54:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:08.722 rmmod nvme_tcp 00:24:08.722 rmmod nvme_fabrics 00:24:08.722 rmmod nvme_keyring 00:24:08.722 19:54:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:08.722 19:54:50 -- nvmf/common.sh@124 -- # set -e 00:24:08.722 19:54:50 -- nvmf/common.sh@125 -- # return 0 00:24:08.722 19:54:50 -- nvmf/common.sh@478 -- # '[' -n 1800727 ']' 
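The kernel_target_abort run above never touches SPDK's own target process; configure_kernel_target drives the in-kernel nvmet target entirely through configfs. The xtrace records only the values being echoed, not the attribute files they land in, so the following is a minimal sketch of the sequence with the standard kernel nvmet attribute names (attr_allow_any_host, device_path, enable, addr_*) assumed rather than taken from the trace:

modprobe nvmet                                      # exposes /sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
sub=$nvmet/subsystems/$nqn

mkdir "$sub"                                        # subsystem, then one namespace, then one port
mkdir "$sub/namespaces/1"
mkdir "$nvmet/ports/1"
echo 1            > "$sub/attr_allow_any_host"      # the test does no host NQN allow-listing
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path" # back the namespace with the spare local disk
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"    # pulls in nvmet_tcp on demand
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$sub" "$nvmet/ports/1/subsystems/"           # only now is the subsystem reachable

# clean_kernel_target above undoes this in reverse:
echo 0 > "$sub/namespaces/1/enable"
rm -f "$nvmet/ports/1/subsystems/$nqn"
rmdir "$sub/namespaces/1" "$nvmet/ports/1" "$sub"
modprobe -r nvmet_tcp nvmet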
00:24:08.722 19:54:50 -- nvmf/common.sh@479 -- # killprocess 1800727 00:24:08.722 19:54:50 -- common/autotest_common.sh@936 -- # '[' -z 1800727 ']' 00:24:08.722 19:54:50 -- common/autotest_common.sh@940 -- # kill -0 1800727 00:24:08.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1800727) - No such process 00:24:08.722 19:54:50 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1800727 is not found' 00:24:08.722 Process with pid 1800727 is not found 00:24:08.722 19:54:50 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:24:08.722 19:54:50 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:09.659 Waiting for block devices as requested 00:24:09.918 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:09.918 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:09.918 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:10.176 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:10.176 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:10.176 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:10.176 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:10.433 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:10.433 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:10.433 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:10.433 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:10.691 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:10.691 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:10.691 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:10.691 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:10.950 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:10.950 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:10.950 19:54:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:10.950 19:54:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:10.950 19:54:52 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.950 19:54:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:10.950 19:54:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.950 19:54:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:10.950 19:54:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.484 19:54:54 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:13.484 00:24:13.484 real 0m38.137s 00:24:13.484 user 1m1.169s 00:24:13.484 sys 0m9.382s 00:24:13.484 19:54:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:13.484 19:54:54 -- common/autotest_common.sh@10 -- # set +x 00:24:13.484 ************************************ 00:24:13.484 END TEST nvmf_abort_qd_sizes 00:24:13.484 ************************************ 00:24:13.484 19:54:54 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:24:13.484 19:54:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:13.484 19:54:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:13.484 19:54:54 -- common/autotest_common.sh@10 -- # set +x 00:24:13.484 ************************************ 00:24:13.484 START TEST keyring_file 00:24:13.484 ************************************ 00:24:13.484 19:54:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:24:13.484 * Looking for test storage... 
00:24:13.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:24:13.484 19:54:54 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:24:13.484 19:54:54 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.484 19:54:54 -- nvmf/common.sh@7 -- # uname -s 00:24:13.484 19:54:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.484 19:54:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.484 19:54:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.484 19:54:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.484 19:54:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.484 19:54:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.484 19:54:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.484 19:54:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.484 19:54:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.484 19:54:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.484 19:54:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:13.484 19:54:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:13.484 19:54:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.484 19:54:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.484 19:54:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.484 19:54:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.484 19:54:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.484 19:54:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.484 19:54:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.484 19:54:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.484 19:54:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.484 19:54:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.484 19:54:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.484 19:54:54 -- paths/export.sh@5 -- # export PATH 00:24:13.484 19:54:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.484 19:54:54 -- nvmf/common.sh@47 -- # : 0 00:24:13.484 19:54:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:13.484 19:54:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:13.484 19:54:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.484 19:54:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.484 19:54:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.484 19:54:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:13.484 19:54:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:13.484 19:54:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:13.484 19:54:54 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:13.484 19:54:54 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:13.484 19:54:54 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:13.484 19:54:54 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:13.484 19:54:54 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:13.484 19:54:54 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:13.484 19:54:54 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:13.484 19:54:54 -- keyring/common.sh@15 -- # local name key digest path 00:24:13.484 19:54:54 -- keyring/common.sh@17 -- # name=key0 00:24:13.484 19:54:54 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:13.484 19:54:54 -- keyring/common.sh@17 -- # digest=0 00:24:13.484 19:54:54 -- keyring/common.sh@18 -- # mktemp 00:24:13.484 19:54:54 -- keyring/common.sh@18 -- # path=/tmp/tmp.dB94kL1rBb 00:24:13.484 19:54:54 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:13.484 19:54:54 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:13.484 19:54:54 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:13.484 19:54:54 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:13.484 19:54:54 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:24:13.484 19:54:54 -- nvmf/common.sh@693 -- # digest=0 00:24:13.484 19:54:54 -- nvmf/common.sh@694 -- # python - 00:24:13.484 19:54:54 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dB94kL1rBb 00:24:13.484 19:54:54 -- keyring/common.sh@23 -- # echo /tmp/tmp.dB94kL1rBb 00:24:13.484 19:54:54 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.dB94kL1rBb 00:24:13.484 19:54:54 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:13.484 19:54:54 -- keyring/common.sh@15 -- # local name key digest path 00:24:13.484 19:54:54 -- keyring/common.sh@17 -- # name=key1 00:24:13.484 19:54:54 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:13.484 19:54:54 -- keyring/common.sh@17 -- # digest=0 00:24:13.484 19:54:54 -- keyring/common.sh@18 -- # mktemp 00:24:13.484 19:54:54 -- keyring/common.sh@18 -- # path=/tmp/tmp.XRmL2ck9PK 00:24:13.484 19:54:54 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:13.484 19:54:54 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:24:13.484 19:54:54 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:13.484 19:54:54 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:13.484 19:54:54 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:24:13.484 19:54:54 -- nvmf/common.sh@693 -- # digest=0 00:24:13.484 19:54:54 -- nvmf/common.sh@694 -- # python - 00:24:13.484 19:54:54 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XRmL2ck9PK 00:24:13.484 19:54:54 -- keyring/common.sh@23 -- # echo /tmp/tmp.XRmL2ck9PK 00:24:13.484 19:54:54 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.XRmL2ck9PK 00:24:13.484 19:54:54 -- keyring/file.sh@30 -- # tgtpid=1806526 00:24:13.484 19:54:54 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:24:13.484 19:54:54 -- keyring/file.sh@32 -- # waitforlisten 1806526 00:24:13.484 19:54:54 -- common/autotest_common.sh@817 -- # '[' -z 1806526 ']' 00:24:13.484 19:54:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.484 19:54:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:13.484 19:54:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.484 19:54:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:13.484 19:54:54 -- common/autotest_common.sh@10 -- # set +x 00:24:13.485 [2024-04-24 19:54:54.822493] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 00:24:13.485 [2024-04-24 19:54:54.822568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806526 ] 00:24:13.485 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.485 [2024-04-24 19:54:54.880181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.485 [2024-04-24 19:54:54.990002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.742 19:54:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:13.742 19:54:55 -- common/autotest_common.sh@850 -- # return 0 00:24:13.742 19:54:55 -- keyring/file.sh@33 -- # rpc_cmd 00:24:13.742 19:54:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.742 19:54:55 -- common/autotest_common.sh@10 -- # set +x 00:24:13.742 [2024-04-24 19:54:55.246546] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.033 null0 00:24:14.033 [2024-04-24 19:54:55.278624] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:14.033 [2024-04-24 19:54:55.279133] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:14.033 [2024-04-24 19:54:55.286658] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:14.033 19:54:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.033 19:54:55 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:14.033 19:54:55 -- common/autotest_common.sh@638 -- # local es=0 00:24:14.033 19:54:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:14.033 19:54:55 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:14.033 19:54:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:14.033 19:54:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:14.033 19:54:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:14.033 19:54:55 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:14.033 19:54:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.033 19:54:55 -- common/autotest_common.sh@10 -- # set +x 00:24:14.033 [2024-04-24 19:54:55.294649] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:24:14.033 { 00:24:14.033 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:14.033 "secure_channel": false, 00:24:14.033 "listen_address": { 00:24:14.033 "trtype": "tcp", 00:24:14.033 "traddr": "127.0.0.1", 00:24:14.033 "trsvcid": "4420" 00:24:14.033 }, 00:24:14.033 "method": "nvmf_subsystem_add_listener", 00:24:14.033 "req_id": 1 00:24:14.033 } 00:24:14.033 Got JSON-RPC error response 00:24:14.033 response: 00:24:14.033 { 00:24:14.033 "code": -32602, 00:24:14.033 "message": "Invalid parameters" 00:24:14.033 } 00:24:14.033 19:54:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:14.033 19:54:55 -- common/autotest_common.sh@641 -- # es=1 00:24:14.033 19:54:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:14.033 19:54:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:14.033 19:54:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:14.033 19:54:55 -- keyring/file.sh@46 -- # bperfpid=1806540 00:24:14.033 19:54:55 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:14.033 19:54:55 -- keyring/file.sh@48 -- # waitforlisten 1806540 /var/tmp/bperf.sock 00:24:14.033 19:54:55 -- common/autotest_common.sh@817 -- # '[' -z 1806540 ']' 00:24:14.033 19:54:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:14.033 19:54:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:14.033 19:54:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:14.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:14.033 19:54:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:14.033 19:54:55 -- common/autotest_common.sh@10 -- # set +x 00:24:14.033 [2024-04-24 19:54:55.341307] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:24:14.033 [2024-04-24 19:54:55.341369] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806540 ] 00:24:14.033 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.033 [2024-04-24 19:54:55.400979] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.033 [2024-04-24 19:54:55.517514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.966 19:54:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:14.966 19:54:56 -- common/autotest_common.sh@850 -- # return 0 00:24:14.966 19:54:56 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dB94kL1rBb 00:24:14.966 19:54:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dB94kL1rBb 00:24:15.223 19:54:56 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.XRmL2ck9PK 00:24:15.223 19:54:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.XRmL2ck9PK 00:24:15.479 19:54:56 -- keyring/file.sh@51 -- # get_key key0 00:24:15.479 19:54:56 -- keyring/file.sh@51 -- # jq -r .path 00:24:15.479 19:54:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:15.479 19:54:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:15.479 19:54:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:15.736 19:54:57 -- keyring/file.sh@51 -- # [[ /tmp/tmp.dB94kL1rBb == \/\t\m\p\/\t\m\p\.\d\B\9\4\k\L\1\r\B\b ]] 00:24:15.736 19:54:57 -- keyring/file.sh@52 -- # get_key key1 00:24:15.736 19:54:57 -- keyring/file.sh@52 -- # jq -r .path 00:24:15.736 19:54:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:15.736 19:54:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:15.736 19:54:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:15.736 19:54:57 -- keyring/file.sh@52 -- # [[ /tmp/tmp.XRmL2ck9PK == \/\t\m\p\/\t\m\p\.\X\R\m\L\2\c\k\9\P\K ]] 00:24:15.736 19:54:57 -- keyring/file.sh@53 -- # get_refcnt key0 00:24:15.736 19:54:57 -- keyring/common.sh@12 -- # get_key key0 00:24:15.736 19:54:57 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:15.736 19:54:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:15.736 19:54:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:15.736 19:54:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:15.993 19:54:57 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:24:15.993 19:54:57 -- keyring/file.sh@54 -- # get_refcnt key1 00:24:15.993 19:54:57 -- keyring/common.sh@12 -- # get_key key1 00:24:15.993 19:54:57 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:15.993 19:54:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:15.993 19:54:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:15.993 19:54:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:16.317 19:54:57 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:16.317 
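The NVMeTLSkey-1 strings that keyring_file_add_key just loaded were generated earlier by format_interchange_psk (the `python -` heredoc at nvmf/common.sh@694 in the trace). Per the NVMe/TCP TLS PSK interchange format, the configured key is base64-encoded together with its own little-endian CRC32; the sketch below reproduces that step, on the assumption that the helper consumes the ASCII key text byte-for-byte and that digest 0 denotes an unhashed, configured PSK:

key=00112233445566778899aabbccddeeff                  # same material as key0 above
python3 - <<EOF > /tmp/psk.key
import base64, zlib

key = b"$key"                                          # assumed: raw ASCII bytes, not hex-decoded
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # CRC32 trailer guards against mistyped keys
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode(), end="")
EOF
chmod 0600 /tmp/psk.key   # anything looser is rejected; see the chmod 0660 negative test further down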
19:54:57 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:16.317 19:54:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:16.575 [2024-04-24 19:54:57.982621] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:16.575 nvme0n1 00:24:16.575 19:54:58 -- keyring/file.sh@59 -- # get_refcnt key0 00:24:16.575 19:54:58 -- keyring/common.sh@12 -- # get_key key0 00:24:16.575 19:54:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:16.575 19:54:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:16.575 19:54:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:16.575 19:54:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:16.832 19:54:58 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:24:16.832 19:54:58 -- keyring/file.sh@60 -- # get_refcnt key1 00:24:16.832 19:54:58 -- keyring/common.sh@12 -- # get_key key1 00:24:16.832 19:54:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:16.832 19:54:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:16.832 19:54:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:16.832 19:54:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:17.088 19:54:58 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:24:17.088 19:54:58 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:17.345 Running I/O for 1 seconds... 
00:24:18.279
00:24:18.279 Latency(us)
00:24:18.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:18.279 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:24:18.279 nvme0n1 : 1.03 4807.14 18.78 0.00 0.00 26250.60 6359.42 40195.41
00:24:18.279 ===================================================================================================================
00:24:18.279 Total : 4807.14 18.78 0.00 0.00 26250.60 6359.42 40195.41
00:24:18.279 0
00:24:18.279 19:54:59 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:24:18.279 19:54:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:24:18.538 19:54:59 -- keyring/file.sh@65 -- # get_refcnt key0
00:24:18.539 19:54:59 -- keyring/common.sh@12 -- # get_key key0
00:24:18.539 19:54:59 -- keyring/common.sh@12 -- # jq -r .refcnt
00:24:18.539 19:54:59 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:24:18.539 19:54:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:24:18.539 19:54:59 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:24:18.797 19:55:00 -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:24:18.797 19:55:00 -- keyring/file.sh@66 -- # get_refcnt key1
00:24:18.797 19:55:00 -- keyring/common.sh@12 -- # get_key key1
00:24:18.797 19:55:00 -- keyring/common.sh@12 -- # jq -r .refcnt
00:24:18.797 19:55:00 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:24:18.797 19:55:00 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:24:18.797 19:55:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:24:19.057 19:55:00 -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:24:19.057 19:55:00 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:24:19.057 19:55:00 -- common/autotest_common.sh@638 -- # local es=0
00:24:19.057 19:55:00 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:24:19.057 19:55:00 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd
00:24:19.057 19:55:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:24:19.057 19:55:00 -- common/autotest_common.sh@630 -- # type -t bperf_cmd
00:24:19.057 19:55:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:24:19.057 19:55:00 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:24:19.057 19:55:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:24:19.314 [2024-04-24 19:55:00.704963] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:24:19.314 [2024-04-24 19:55:00.705440] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a6fe0 (107): Transport endpoint is not connected
00:24:19.314 [2024-04-24 19:55:00.706429] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a6fe0 (9): Bad file descriptor
00:24:19.314 [2024-04-24 19:55:00.707428] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:24:19.314 [2024-04-24 19:55:00.707452] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:24:19.314 [2024-04-24 19:55:00.707478] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:19.314 request:
00:24:19.314 {
00:24:19.314 "name": "nvme0",
00:24:19.314 "trtype": "tcp",
00:24:19.314 "traddr": "127.0.0.1",
00:24:19.314 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:24:19.314 "adrfam": "ipv4",
00:24:19.314 "trsvcid": "4420",
00:24:19.314 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:24:19.314 "psk": "key1",
00:24:19.314 "method": "bdev_nvme_attach_controller",
00:24:19.314 "req_id": 1
00:24:19.314 }
00:24:19.314 Got JSON-RPC error response
00:24:19.314 response:
00:24:19.314 {
00:24:19.314 "code": -32602,
00:24:19.314 "message": "Invalid parameters"
00:24:19.314 }
00:24:19.314 19:55:00 -- common/autotest_common.sh@641 -- # es=1
00:24:19.314 19:55:00 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:24:19.314 19:55:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:24:19.314 19:55:00 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:24:19.314 19:55:00 -- keyring/file.sh@71 -- # get_refcnt key0
00:24:19.314 19:55:00 -- keyring/common.sh@12 -- # get_key key0
00:24:19.314 19:55:00 -- keyring/common.sh@12 -- # jq -r .refcnt
00:24:19.314 19:55:00 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:24:19.314 19:55:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:24:19.314 19:55:00 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:24:19.572 19:55:00 -- keyring/file.sh@71 -- # (( 1 == 1 ))
00:24:19.572 19:55:00 -- keyring/file.sh@72 -- # get_refcnt key1
00:24:19.572 19:55:00 -- keyring/common.sh@12 -- # get_key key1
00:24:19.572 19:55:00 -- keyring/common.sh@12 -- # jq -r .refcnt
00:24:19.572 19:55:00 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:24:19.572 19:55:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:24:19.572 19:55:00 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:24:19.829 19:55:01 -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:24:19.829 19:55:01 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0
00:24:19.830 19:55:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:24:20.087 19:55:01 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1
00:24:20.087 19:55:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:24:20.345 19:55:01 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys
00:24:20.345 19:55:01 -- keyring/file.sh@77 -- # jq length
00:24:20.345 19:55:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:24:20.602 19:55:01
-- keyring/file.sh@77 -- # (( 0 == 0 )) 00:24:20.602 19:55:01 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.dB94kL1rBb 00:24:20.602 19:55:01 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.dB94kL1rBb 00:24:20.602 19:55:01 -- common/autotest_common.sh@638 -- # local es=0 00:24:20.602 19:55:01 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.dB94kL1rBb 00:24:20.602 19:55:01 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:24:20.602 19:55:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:20.602 19:55:01 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:24:20.602 19:55:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:20.602 19:55:01 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dB94kL1rBb 00:24:20.602 19:55:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dB94kL1rBb 00:24:20.859 [2024-04-24 19:55:02.157181] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.dB94kL1rBb': 0100660 00:24:20.859 [2024-04-24 19:55:02.157220] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:20.860 request: 00:24:20.860 { 00:24:20.860 "name": "key0", 00:24:20.860 "path": "/tmp/tmp.dB94kL1rBb", 00:24:20.860 "method": "keyring_file_add_key", 00:24:20.860 "req_id": 1 00:24:20.860 } 00:24:20.860 Got JSON-RPC error response 00:24:20.860 response: 00:24:20.860 { 00:24:20.860 "code": -1, 00:24:20.860 "message": "Operation not permitted" 00:24:20.860 } 00:24:20.860 19:55:02 -- common/autotest_common.sh@641 -- # es=1 00:24:20.860 19:55:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:20.860 19:55:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:20.860 19:55:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:20.860 19:55:02 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.dB94kL1rBb 00:24:20.860 19:55:02 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dB94kL1rBb 00:24:20.860 19:55:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dB94kL1rBb 00:24:21.117 19:55:02 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.dB94kL1rBb 00:24:21.117 19:55:02 -- keyring/file.sh@88 -- # get_refcnt key0 00:24:21.117 19:55:02 -- keyring/common.sh@12 -- # get_key key0 00:24:21.117 19:55:02 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:21.117 19:55:02 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:21.117 19:55:02 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:21.117 19:55:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:21.375 19:55:02 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:24:21.375 19:55:02 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:21.375 19:55:02 -- common/autotest_common.sh@638 -- # local es=0 00:24:21.375 19:55:02 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:21.375 19:55:02 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:24:21.375 19:55:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:21.375 19:55:02 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:24:21.375 19:55:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:21.375 19:55:02 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:21.375 19:55:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:21.633 [2024-04-24 19:55:02.903293] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.dB94kL1rBb': No such file or directory 00:24:21.633 [2024-04-24 19:55:02.903329] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:21.633 [2024-04-24 19:55:02.903371] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:21.633 [2024-04-24 19:55:02.903384] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:21.633 [2024-04-24 19:55:02.903397] bdev_nvme.c:6204:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:21.633 request: 00:24:21.633 { 00:24:21.633 "name": "nvme0", 00:24:21.633 "trtype": "tcp", 00:24:21.633 "traddr": "127.0.0.1", 00:24:21.633 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:21.633 "adrfam": "ipv4", 00:24:21.633 "trsvcid": "4420", 00:24:21.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:21.633 "psk": "key0", 00:24:21.633 "method": "bdev_nvme_attach_controller", 00:24:21.633 "req_id": 1 00:24:21.633 } 00:24:21.633 Got JSON-RPC error response 00:24:21.633 response: 00:24:21.633 { 00:24:21.633 "code": -19, 00:24:21.633 "message": "No such device" 00:24:21.633 } 00:24:21.633 19:55:02 -- common/autotest_common.sh@641 -- # es=1 00:24:21.633 19:55:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:21.633 19:55:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:21.633 19:55:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:21.633 19:55:02 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:24:21.633 19:55:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:21.890 19:55:03 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:21.890 19:55:03 -- keyring/common.sh@15 -- # local name key digest path 00:24:21.890 19:55:03 -- keyring/common.sh@17 -- # name=key0 00:24:21.890 19:55:03 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:21.890 19:55:03 -- keyring/common.sh@17 -- # digest=0 00:24:21.890 19:55:03 -- keyring/common.sh@18 -- # mktemp 00:24:21.890 19:55:03 -- keyring/common.sh@18 -- # path=/tmp/tmp.IYrzG1j8Me 00:24:21.890 19:55:03 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:21.890 19:55:03 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:21.890 19:55:03 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:21.890 19:55:03 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:21.890 19:55:03 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:24:21.890 19:55:03 -- nvmf/common.sh@693 -- # digest=0 00:24:21.890 19:55:03 -- nvmf/common.sh@694 -- # python - 00:24:21.890 19:55:03 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IYrzG1j8Me 00:24:21.890 19:55:03 -- keyring/common.sh@23 -- # echo /tmp/tmp.IYrzG1j8Me 00:24:21.890 19:55:03 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.IYrzG1j8Me 00:24:21.890 19:55:03 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IYrzG1j8Me 00:24:21.890 19:55:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IYrzG1j8Me 00:24:22.148 19:55:03 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:22.148 19:55:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:22.405 nvme0n1 00:24:22.405 19:55:03 -- keyring/file.sh@99 -- # get_refcnt key0 00:24:22.405 19:55:03 -- keyring/common.sh@12 -- # get_key key0 00:24:22.405 19:55:03 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:22.405 19:55:03 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:22.405 19:55:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:22.405 19:55:03 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:22.663 19:55:04 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:24:22.663 19:55:04 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:24:22.663 19:55:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:22.921 19:55:04 -- keyring/file.sh@101 -- # get_key key0 00:24:22.921 19:55:04 -- keyring/file.sh@101 -- # jq -r .removed 00:24:22.921 19:55:04 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:22.921 19:55:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:22.921 19:55:04 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:23.178 19:55:04 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:24:23.178 19:55:04 -- keyring/file.sh@102 -- # get_refcnt key0 00:24:23.178 19:55:04 -- keyring/common.sh@12 -- # get_key key0 00:24:23.178 19:55:04 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:23.178 19:55:04 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:23.179 19:55:04 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:23.179 19:55:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:23.436 19:55:04 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:24:23.436 19:55:04 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:23.436 19:55:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:23.694 19:55:04 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:24:23.694 19:55:04 -- keyring/file.sh@104 -- # jq length 00:24:23.694 
19:55:05 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:23.951 19:55:05 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:24:23.951 19:55:05 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IYrzG1j8Me 00:24:23.951 19:55:05 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IYrzG1j8Me 00:24:24.208 19:55:05 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.XRmL2ck9PK 00:24:24.208 19:55:05 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.XRmL2ck9PK 00:24:24.208 19:55:05 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:24.208 19:55:05 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:24.774 nvme0n1 00:24:24.774 19:55:06 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:24:24.774 19:55:06 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:25.033 19:55:06 -- keyring/file.sh@112 -- # config='{ 00:24:25.033 "subsystems": [ 00:24:25.033 { 00:24:25.033 "subsystem": "keyring", 00:24:25.033 "config": [ 00:24:25.033 { 00:24:25.033 "method": "keyring_file_add_key", 00:24:25.033 "params": { 00:24:25.033 "name": "key0", 00:24:25.033 "path": "/tmp/tmp.IYrzG1j8Me" 00:24:25.034 } 00:24:25.034 }, 00:24:25.034 { 00:24:25.034 "method": "keyring_file_add_key", 00:24:25.034 "params": { 00:24:25.034 "name": "key1", 00:24:25.034 "path": "/tmp/tmp.XRmL2ck9PK" 00:24:25.034 } 00:24:25.034 } 00:24:25.034 ] 00:24:25.034 }, 00:24:25.034 { 00:24:25.034 "subsystem": "iobuf", 00:24:25.034 "config": [ 00:24:25.034 { 00:24:25.034 "method": "iobuf_set_options", 00:24:25.034 "params": { 00:24:25.034 "small_pool_count": 8192, 00:24:25.034 "large_pool_count": 1024, 00:24:25.034 "small_bufsize": 8192, 00:24:25.034 "large_bufsize": 135168 00:24:25.034 } 00:24:25.034 } 00:24:25.034 ] 00:24:25.034 }, 00:24:25.034 { 00:24:25.034 "subsystem": "sock", 00:24:25.034 "config": [ 00:24:25.034 { 00:24:25.034 "method": "sock_impl_set_options", 00:24:25.034 "params": { 00:24:25.034 "impl_name": "posix", 00:24:25.034 "recv_buf_size": 2097152, 00:24:25.034 "send_buf_size": 2097152, 00:24:25.034 "enable_recv_pipe": true, 00:24:25.034 "enable_quickack": false, 00:24:25.034 "enable_placement_id": 0, 00:24:25.034 "enable_zerocopy_send_server": true, 00:24:25.034 "enable_zerocopy_send_client": false, 00:24:25.034 "zerocopy_threshold": 0, 00:24:25.034 "tls_version": 0, 00:24:25.034 "enable_ktls": false 00:24:25.034 } 00:24:25.034 }, 00:24:25.034 { 00:24:25.034 "method": "sock_impl_set_options", 00:24:25.034 "params": { 00:24:25.034 "impl_name": "ssl", 00:24:25.034 "recv_buf_size": 4096, 00:24:25.034 "send_buf_size": 4096, 00:24:25.034 "enable_recv_pipe": true, 00:24:25.034 "enable_quickack": false, 00:24:25.034 "enable_placement_id": 0, 00:24:25.034 "enable_zerocopy_send_server": true, 00:24:25.034 "enable_zerocopy_send_client": false, 00:24:25.034 "zerocopy_threshold": 0, 00:24:25.034 
"tls_version": 0, 00:24:25.034 "enable_ktls": false 00:24:25.034 } 00:24:25.034 } 00:24:25.034 ] 00:24:25.034 }, 00:24:25.034 { 00:24:25.034 "subsystem": "vmd", 00:24:25.034 "config": [] 00:24:25.034 }, 00:24:25.034 { 00:24:25.034 "subsystem": "accel", 00:24:25.034 "config": [ 00:24:25.034 { 00:24:25.034 "method": "accel_set_options", 00:24:25.034 "params": { 00:24:25.034 "small_cache_size": 128, 00:24:25.034 "large_cache_size": 16, 00:24:25.034 "task_count": 2048, 00:24:25.034 "sequence_count": 2048, 00:24:25.034 "buf_count": 2048 00:24:25.034 } 00:24:25.034 } 00:24:25.034 ] 00:24:25.034 }, 00:24:25.034 { 00:24:25.034 "subsystem": "bdev", 00:24:25.034 "config": [ 00:24:25.034 { 00:24:25.034 "method": "bdev_set_options", 00:24:25.034 "params": { 00:24:25.034 "bdev_io_pool_size": 65535, 00:24:25.034 "bdev_io_cache_size": 256, 00:24:25.034 "bdev_auto_examine": true, 00:24:25.034 "iobuf_small_cache_size": 128, 00:24:25.034 "iobuf_large_cache_size": 16 00:24:25.034 } 00:24:25.034 }, 00:24:25.034 { 00:24:25.034 "method": "bdev_raid_set_options", 00:24:25.034 "params": { 00:24:25.034 "process_window_size_kb": 1024 00:24:25.034 } 00:24:25.034 }, 00:24:25.034 { 00:24:25.034 "method": "bdev_iscsi_set_options", 00:24:25.034 "params": { 00:24:25.034 "timeout_sec": 30 00:24:25.034 } 00:24:25.034 }, 00:24:25.034 { 00:24:25.034 "method": "bdev_nvme_set_options", 00:24:25.034 "params": { 00:24:25.034 "action_on_timeout": "none", 00:24:25.034 "timeout_us": 0, 00:24:25.034 "timeout_admin_us": 0, 00:24:25.034 "keep_alive_timeout_ms": 10000, 00:24:25.034 "arbitration_burst": 0, 00:24:25.034 "low_priority_weight": 0, 00:24:25.034 "medium_priority_weight": 0, 00:24:25.034 "high_priority_weight": 0, 00:24:25.034 "nvme_adminq_poll_period_us": 10000, 00:24:25.034 "nvme_ioq_poll_period_us": 0, 00:24:25.034 "io_queue_requests": 512, 00:24:25.034 "delay_cmd_submit": true, 00:24:25.034 "transport_retry_count": 4, 00:24:25.034 "bdev_retry_count": 3, 00:24:25.034 "transport_ack_timeout": 0, 00:24:25.034 "ctrlr_loss_timeout_sec": 0, 00:24:25.034 "reconnect_delay_sec": 0, 00:24:25.034 "fast_io_fail_timeout_sec": 0, 00:24:25.034 "disable_auto_failback": false, 00:24:25.034 "generate_uuids": false, 00:24:25.034 "transport_tos": 0, 00:24:25.034 "nvme_error_stat": false, 00:24:25.034 "rdma_srq_size": 0, 00:24:25.034 "io_path_stat": false, 00:24:25.034 "allow_accel_sequence": false, 00:24:25.034 "rdma_max_cq_size": 0, 00:24:25.034 "rdma_cm_event_timeout_ms": 0, 00:24:25.034 "dhchap_digests": [ 00:24:25.034 "sha256", 00:24:25.034 "sha384", 00:24:25.034 "sha512" 00:24:25.034 ], 00:24:25.034 "dhchap_dhgroups": [ 00:24:25.034 "null", 00:24:25.034 "ffdhe2048", 00:24:25.034 "ffdhe3072", 00:24:25.034 "ffdhe4096", 00:24:25.034 "ffdhe6144", 00:24:25.034 "ffdhe8192" 00:24:25.034 ] 00:24:25.034 } 00:24:25.034 }, 00:24:25.034 { 00:24:25.034 "method": "bdev_nvme_attach_controller", 00:24:25.034 "params": { 00:24:25.034 "name": "nvme0", 00:24:25.034 "trtype": "TCP", 00:24:25.034 "adrfam": "IPv4", 00:24:25.034 "traddr": "127.0.0.1", 00:24:25.034 "trsvcid": "4420", 00:24:25.034 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.034 "prchk_reftag": false, 00:24:25.034 "prchk_guard": false, 00:24:25.034 "ctrlr_loss_timeout_sec": 0, 00:24:25.034 "reconnect_delay_sec": 0, 00:24:25.034 "fast_io_fail_timeout_sec": 0, 00:24:25.034 "psk": "key0", 00:24:25.034 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:25.034 "hdgst": false, 00:24:25.034 "ddgst": false 00:24:25.034 } 00:24:25.034 }, 00:24:25.034 { 00:24:25.034 "method": "bdev_nvme_set_hotplug", 
00:24:25.034 "params": { 00:24:25.034 "period_us": 100000, 00:24:25.034 "enable": false 00:24:25.034 } 00:24:25.034 }, 00:24:25.034 { 00:24:25.034 "method": "bdev_wait_for_examine" 00:24:25.034 } 00:24:25.034 ] 00:24:25.034 }, 00:24:25.034 { 00:24:25.034 "subsystem": "nbd", 00:24:25.034 "config": [] 00:24:25.034 } 00:24:25.034 ] 00:24:25.034 }' 00:24:25.034 19:55:06 -- keyring/file.sh@114 -- # killprocess 1806540 00:24:25.034 19:55:06 -- common/autotest_common.sh@936 -- # '[' -z 1806540 ']' 00:24:25.034 19:55:06 -- common/autotest_common.sh@940 -- # kill -0 1806540 00:24:25.034 19:55:06 -- common/autotest_common.sh@941 -- # uname 00:24:25.034 19:55:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:25.034 19:55:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1806540 00:24:25.034 19:55:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:25.034 19:55:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:25.034 19:55:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1806540' 00:24:25.034 killing process with pid 1806540 00:24:25.034 19:55:06 -- common/autotest_common.sh@955 -- # kill 1806540 00:24:25.034 Received shutdown signal, test time was about 1.000000 seconds 00:24:25.034 00:24:25.034 Latency(us) 00:24:25.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.034 =================================================================================================================== 00:24:25.034 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:25.034 19:55:06 -- common/autotest_common.sh@960 -- # wait 1806540 00:24:25.293 19:55:06 -- keyring/file.sh@117 -- # bperfpid=1808009 00:24:25.293 19:55:06 -- keyring/file.sh@119 -- # waitforlisten 1808009 /var/tmp/bperf.sock 00:24:25.293 19:55:06 -- common/autotest_common.sh@817 -- # '[' -z 1808009 ']' 00:24:25.293 19:55:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:25.293 19:55:06 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:25.293 19:55:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:25.293 19:55:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:25.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:24:25.293 19:55:06 -- keyring/file.sh@115 -- # echo '{ 00:24:25.293 "subsystems": [ 00:24:25.293 { 00:24:25.293 "subsystem": "keyring", 00:24:25.293 "config": [ 00:24:25.293 { 00:24:25.293 "method": "keyring_file_add_key", 00:24:25.293 "params": { 00:24:25.293 "name": "key0", 00:24:25.293 "path": "/tmp/tmp.IYrzG1j8Me" 00:24:25.293 } 00:24:25.293 }, 00:24:25.293 { 00:24:25.293 "method": "keyring_file_add_key", 00:24:25.293 "params": { 00:24:25.293 "name": "key1", 00:24:25.293 "path": "/tmp/tmp.XRmL2ck9PK" 00:24:25.293 } 00:24:25.293 } 00:24:25.293 ] 00:24:25.293 }, 00:24:25.293 { 00:24:25.293 "subsystem": "iobuf", 00:24:25.293 "config": [ 00:24:25.293 { 00:24:25.293 "method": "iobuf_set_options", 00:24:25.293 "params": { 00:24:25.293 "small_pool_count": 8192, 00:24:25.293 "large_pool_count": 1024, 00:24:25.293 "small_bufsize": 8192, 00:24:25.293 "large_bufsize": 135168 00:24:25.293 } 00:24:25.293 } 00:24:25.293 ] 00:24:25.293 }, 00:24:25.293 { 00:24:25.293 "subsystem": "sock", 00:24:25.293 "config": [ 00:24:25.293 { 00:24:25.293 "method": "sock_impl_set_options", 00:24:25.293 "params": { 00:24:25.293 "impl_name": "posix", 00:24:25.293 "recv_buf_size": 2097152, 00:24:25.293 "send_buf_size": 2097152, 00:24:25.293 "enable_recv_pipe": true, 00:24:25.293 "enable_quickack": false, 00:24:25.293 "enable_placement_id": 0, 00:24:25.293 "enable_zerocopy_send_server": true, 00:24:25.293 "enable_zerocopy_send_client": false, 00:24:25.293 "zerocopy_threshold": 0, 00:24:25.293 "tls_version": 0, 00:24:25.293 "enable_ktls": false 00:24:25.293 } 00:24:25.293 }, 00:24:25.293 { 00:24:25.293 "method": "sock_impl_set_options", 00:24:25.293 "params": { 00:24:25.293 "impl_name": "ssl", 00:24:25.293 "recv_buf_size": 4096, 00:24:25.293 "send_buf_size": 4096, 00:24:25.293 "enable_recv_pipe": true, 00:24:25.293 "enable_quickack": false, 00:24:25.293 "enable_placement_id": 0, 00:24:25.293 "enable_zerocopy_send_server": true, 00:24:25.293 "enable_zerocopy_send_client": false, 00:24:25.293 "zerocopy_threshold": 0, 00:24:25.293 "tls_version": 0, 00:24:25.293 "enable_ktls": false 00:24:25.293 } 00:24:25.293 } 00:24:25.293 ] 00:24:25.293 }, 00:24:25.293 { 00:24:25.293 "subsystem": "vmd", 00:24:25.293 "config": [] 00:24:25.293 }, 00:24:25.293 { 00:24:25.293 "subsystem": "accel", 00:24:25.293 "config": [ 00:24:25.293 { 00:24:25.293 "method": "accel_set_options", 00:24:25.293 "params": { 00:24:25.293 "small_cache_size": 128, 00:24:25.293 "large_cache_size": 16, 00:24:25.293 "task_count": 2048, 00:24:25.293 "sequence_count": 2048, 00:24:25.293 "buf_count": 2048 00:24:25.293 } 00:24:25.293 } 00:24:25.293 ] 00:24:25.293 }, 00:24:25.293 { 00:24:25.293 "subsystem": "bdev", 00:24:25.293 "config": [ 00:24:25.293 { 00:24:25.293 "method": "bdev_set_options", 00:24:25.293 "params": { 00:24:25.293 "bdev_io_pool_size": 65535, 00:24:25.293 "bdev_io_cache_size": 256, 00:24:25.293 "bdev_auto_examine": true, 00:24:25.293 "iobuf_small_cache_size": 128, 00:24:25.293 "iobuf_large_cache_size": 16 00:24:25.293 } 00:24:25.293 }, 00:24:25.293 { 00:24:25.293 "method": "bdev_raid_set_options", 00:24:25.293 "params": { 00:24:25.293 "process_window_size_kb": 1024 00:24:25.293 } 00:24:25.293 }, 00:24:25.293 { 00:24:25.293 "method": "bdev_iscsi_set_options", 00:24:25.293 "params": { 00:24:25.293 "timeout_sec": 30 00:24:25.293 } 00:24:25.293 }, 00:24:25.293 { 00:24:25.293 "method": "bdev_nvme_set_options", 00:24:25.293 "params": { 00:24:25.293 "action_on_timeout": "none", 00:24:25.293 "timeout_us": 0, 00:24:25.293 "timeout_admin_us": 0, 00:24:25.293 
"keep_alive_timeout_ms": 10000, 00:24:25.293 "arbitration_burst": 0, 00:24:25.293 "low_priority_weight": 0, 00:24:25.293 "medium_priority_weight": 0, 00:24:25.293 "high_priority_weight": 0, 00:24:25.293 "nvme_adminq_poll_period_us": 10000, 00:24:25.293 "nvme_ioq_poll_period_us": 0, 00:24:25.293 "io_queue_requests": 512, 00:24:25.293 "delay_cmd_submit": true, 00:24:25.293 "transport_retry_count": 4, 00:24:25.293 "bdev_retry_count": 3, 00:24:25.293 "transport_ack_timeout": 0, 00:24:25.293 "ctrlr_loss_timeout_sec": 0, 00:24:25.293 "reconnect_delay_sec": 0, 00:24:25.293 "fast_io_fail_timeout_sec": 0, 00:24:25.293 "disable_auto_failback": false, 00:24:25.293 "generate_uuids": false, 00:24:25.293 "transport_tos": 0, 00:24:25.293 "nvme_error_stat": false, 00:24:25.293 "rdma_srq_size": 0, 00:24:25.293 "io_path_stat": false, 00:24:25.293 "allow_accel_sequence": false, 00:24:25.293 "rdma_max_cq_size": 0, 00:24:25.293 "rdma_cm_event_timeout_ms": 0, 00:24:25.293 "dhchap_digests": [ 00:24:25.293 "sha256", 00:24:25.293 "sha384", 00:24:25.293 "sha512" 00:24:25.293 ], 00:24:25.293 "dhchap_dhgroups": [ 00:24:25.293 "null", 00:24:25.293 "ffdhe2048", 00:24:25.293 "ffdhe3072", 00:24:25.293 "ffdhe4096", 00:24:25.293 "ffdhe6144", 00:24:25.293 "ffdhe8192" 00:24:25.293 ] 00:24:25.293 } 00:24:25.293 }, 00:24:25.293 { 00:24:25.293 "method": "bdev_nvme_attach_controller", 00:24:25.293 "params": { 00:24:25.293 "name": "nvme0", 00:24:25.293 "trtype": "TCP", 00:24:25.293 "adrfam": "IPv4", 00:24:25.293 "traddr": "127.0.0.1", 00:24:25.293 "trsvcid": "4420", 00:24:25.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.293 "prchk_reftag": false, 00:24:25.293 "prchk_guard": false, 00:24:25.293 "ctrlr_loss_timeout_sec": 0, 00:24:25.293 "reconnect_delay_sec": 0, 00:24:25.293 "fast_io_fail_timeout_sec": 0, 00:24:25.293 "psk": "key0", 00:24:25.293 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:25.294 "hdgst": false, 00:24:25.294 "ddgst": false 00:24:25.294 } 00:24:25.294 }, 00:24:25.294 { 00:24:25.294 "method": "bdev_nvme_set_hotplug", 00:24:25.294 "params": { 00:24:25.294 "period_us": 100000, 00:24:25.294 "enable": false 00:24:25.294 } 00:24:25.294 }, 00:24:25.294 { 00:24:25.294 "method": "bdev_wait_for_examine" 00:24:25.294 } 00:24:25.294 ] 00:24:25.294 }, 00:24:25.294 { 00:24:25.294 "subsystem": "nbd", 00:24:25.294 "config": [] 00:24:25.294 } 00:24:25.294 ] 00:24:25.294 }' 00:24:25.294 19:55:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:25.294 19:55:06 -- common/autotest_common.sh@10 -- # set +x 00:24:25.294 [2024-04-24 19:55:06.660122] Starting SPDK v24.05-pre git sha1 166ede64d / DPDK 23.11.0 initialization... 
00:24:25.294 [2024-04-24 19:55:06.660217] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808009 ] 00:24:25.294 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.294 [2024-04-24 19:55:06.717511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.552 [2024-04-24 19:55:06.826494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.552 [2024-04-24 19:55:07.004083] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:26.117 19:55:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:26.117 19:55:07 -- common/autotest_common.sh@850 -- # return 0 00:24:26.117 19:55:07 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:24:26.117 19:55:07 -- keyring/file.sh@120 -- # jq length 00:24:26.117 19:55:07 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:26.375 19:55:07 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:24:26.375 19:55:07 -- keyring/file.sh@121 -- # get_refcnt key0 00:24:26.375 19:55:07 -- keyring/common.sh@12 -- # get_key key0 00:24:26.375 19:55:07 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:26.375 19:55:07 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:26.375 19:55:07 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:26.375 19:55:07 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:26.634 19:55:08 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:26.634 19:55:08 -- keyring/file.sh@122 -- # get_refcnt key1 00:24:26.634 19:55:08 -- keyring/common.sh@12 -- # get_key key1 00:24:26.634 19:55:08 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:26.634 19:55:08 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:26.634 19:55:08 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:26.634 19:55:08 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:26.918 19:55:08 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:24:26.918 19:55:08 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:24:26.918 19:55:08 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:24:26.918 19:55:08 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:27.176 19:55:08 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:24:27.176 19:55:08 -- keyring/file.sh@1 -- # cleanup 00:24:27.176 19:55:08 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.IYrzG1j8Me /tmp/tmp.XRmL2ck9PK 00:24:27.176 19:55:08 -- keyring/file.sh@20 -- # killprocess 1808009 00:24:27.176 19:55:08 -- common/autotest_common.sh@936 -- # '[' -z 1808009 ']' 00:24:27.176 19:55:08 -- common/autotest_common.sh@940 -- # kill -0 1808009 00:24:27.176 19:55:08 -- common/autotest_common.sh@941 -- # uname 00:24:27.176 19:55:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:27.176 19:55:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1808009 00:24:27.176 19:55:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:27.176 19:55:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:27.176 19:55:08 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1808009' 00:24:27.176 killing process with pid 1808009 00:24:27.176 19:55:08 -- common/autotest_common.sh@955 -- # kill 1808009 00:24:27.176 Received shutdown signal, test time was about 1.000000 seconds 00:24:27.176 00:24:27.176 Latency(us) 00:24:27.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.176 =================================================================================================================== 00:24:27.176 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:27.176 19:55:08 -- common/autotest_common.sh@960 -- # wait 1808009 00:24:27.434 19:55:08 -- keyring/file.sh@21 -- # killprocess 1806526 00:24:27.434 19:55:08 -- common/autotest_common.sh@936 -- # '[' -z 1806526 ']' 00:24:27.434 19:55:08 -- common/autotest_common.sh@940 -- # kill -0 1806526 00:24:27.434 19:55:08 -- common/autotest_common.sh@941 -- # uname 00:24:27.434 19:55:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:27.434 19:55:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1806526 00:24:27.434 19:55:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:27.434 19:55:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:27.434 19:55:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1806526' 00:24:27.434 killing process with pid 1806526 00:24:27.434 19:55:08 -- common/autotest_common.sh@955 -- # kill 1806526 00:24:27.434 [2024-04-24 19:55:08.901128] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:27.434 19:55:08 -- common/autotest_common.sh@960 -- # wait 1806526 00:24:27.999 00:24:27.999 real 0m14.756s 00:24:27.999 user 0m35.879s 00:24:27.999 sys 0m3.334s 00:24:27.999 19:55:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:27.999 19:55:09 -- common/autotest_common.sh@10 -- # set +x 00:24:27.999 ************************************ 00:24:27.999 END TEST keyring_file 00:24:27.999 ************************************ 00:24:27.999 19:55:09 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:24:27.999 19:55:09 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:24:27.999 19:55:09 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:24:27.999 19:55:09 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:24:27.999 19:55:09 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:27.999 19:55:09 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:24:27.999 19:55:09 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:27.999 19:55:09 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:24:27.999 19:55:09 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:24:27.999 19:55:09 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:24:27.999 19:55:09 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:27.999 19:55:09 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:24:27.999 19:55:09 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:24:27.999 19:55:09 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:24:27.999 19:55:09 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:24:27.999 19:55:09 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:24:27.999 19:55:09 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:24:27.999 19:55:09 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:24:27.999 19:55:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:27.999 19:55:09 -- common/autotest_common.sh@10 -- # set +x 00:24:27.999 19:55:09 -- spdk/autotest.sh@381 -- # 
autotest_cleanup 00:24:27.999 19:55:09 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:24:27.999 19:55:09 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:24:27.999 19:55:09 -- common/autotest_common.sh@10 -- # set +x 00:24:29.898 INFO: APP EXITING 00:24:29.898 INFO: killing all VMs 00:24:29.898 INFO: killing vhost app 00:24:29.898 INFO: EXIT DONE 00:24:30.830 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:24:30.830 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:24:30.830 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:24:30.830 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:24:30.830 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:24:30.830 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:24:30.830 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:24:30.830 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:24:30.830 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:24:30.830 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:24:30.830 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:24:31.087 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:24:31.087 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:24:31.087 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:24:31.087 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:24:31.087 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:24:31.087 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:24:32.461 Cleaning 00:24:32.461 Removing: /var/run/dpdk/spdk0/config 00:24:32.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:32.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:32.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:32.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:32.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:24:32.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:24:32.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:24:32.461 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:24:32.461 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:32.461 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:32.461 Removing: /var/run/dpdk/spdk1/config 00:24:32.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:32.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:32.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:32.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:32.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:24:32.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:24:32.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:24:32.461 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:24:32.461 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:32.461 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:32.461 Removing: /var/run/dpdk/spdk1/mp_socket 00:24:32.461 Removing: /var/run/dpdk/spdk2/config 00:24:32.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:32.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:32.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:32.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:32.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:24:32.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 
00:24:32.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:24:32.461 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:24:32.461 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:32.461 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:32.461 Removing: /var/run/dpdk/spdk3/config 00:24:32.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:32.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:32.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:32.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:32.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:24:32.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:24:32.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:24:32.461 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:24:32.461 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:32.461 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:32.461 Removing: /var/run/dpdk/spdk4/config 00:24:32.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:32.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:32.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:32.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:32.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:24:32.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:24:32.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:24:32.461 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:24:32.461 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:32.461 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:32.461 Removing: /dev/shm/bdev_svc_trace.1 00:24:32.461 Removing: /dev/shm/nvmf_trace.0 00:24:32.461 Removing: /dev/shm/spdk_tgt_trace.pid1578177 00:24:32.461 Removing: /var/run/dpdk/spdk0 00:24:32.461 Removing: /var/run/dpdk/spdk1 00:24:32.461 Removing: /var/run/dpdk/spdk2 00:24:32.461 Removing: /var/run/dpdk/spdk3 00:24:32.461 Removing: /var/run/dpdk/spdk4 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1576460 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1577215 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1578177 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1578667 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1579364 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1579502 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1580239 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1580368 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1580632 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1581900 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1582869 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1583192 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1583386 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1583725 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1583937 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1584098 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1584269 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1584575 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1585045 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1587414 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1587699 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1587871 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1587885 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1588425 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1588563 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1588881 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1589005 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1589541 00:24:32.461 Removing: 
/var/run/dpdk/spdk_pid1589820 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1589995 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1590134 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1590641 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1590800 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1591009 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1591189 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1591347 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1591555 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1591717 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1591887 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1592164 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1592329 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1592610 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1592780 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1592992 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1593227 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1593390 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1593672 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1593840 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1594120 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1594287 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1594453 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1594729 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1594903 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1595184 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1595356 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1595553 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1595802 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1595996 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1596266 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1598443 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1625226 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1628364 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1634234 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1637551 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1640062 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1640580 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1647989 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1647992 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1648539 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1649194 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1649850 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1650245 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1650258 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1650399 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1650526 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1650537 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1651189 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1651737 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1652391 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1652789 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1652802 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1653051 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1654085 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1654944 00:24:32.461 Removing: /var/run/dpdk/spdk_pid1661074 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1661350 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1664007 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1667723 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1669788 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1676192 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1681537 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1682726 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1683394 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1694191 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1696446 00:24:32.720 Removing: 
/var/run/dpdk/spdk_pid1699236 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1700418 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1701624 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1701761 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1701898 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1702035 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1702484 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1703802 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1704544 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1704970 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1706712 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1707138 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1707586 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1710111 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1716035 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1718794 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1722452 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1723413 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1724952 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1727844 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1730304 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1734430 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1734545 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1737333 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1737582 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1737724 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1737988 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1738005 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1740622 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1740961 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1743634 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1745616 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1749054 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1752371 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1756839 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1756841 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1769653 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1770180 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1770588 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1771001 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1771710 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1772127 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1772538 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1772951 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1775448 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1775706 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1779513 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1779593 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1781305 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1786230 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1786351 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1789273 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1790683 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1792092 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1792907 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1794363 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1795165 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1801159 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1801436 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1801829 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1803396 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1803798 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1804088 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1806526 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1806540 00:24:32.720 Removing: /var/run/dpdk/spdk_pid1808009 00:24:32.720 Clean 00:24:32.979 19:55:14 -- common/autotest_common.sh@1437 -- # 
return 0 00:24:32.979 19:55:14 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:24:32.979 19:55:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:32.979 19:55:14 -- common/autotest_common.sh@10 -- # set +x 00:24:32.979 19:55:14 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:24:32.979 19:55:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:32.979 19:55:14 -- common/autotest_common.sh@10 -- # set +x 00:24:32.979 19:55:14 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:24:32.979 19:55:14 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:24:32.979 19:55:14 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:24:32.979 19:55:14 -- spdk/autotest.sh@389 -- # hash lcov 00:24:32.979 19:55:14 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:24:32.979 19:55:14 -- spdk/autotest.sh@391 -- # hostname 00:24:32.979 19:55:14 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:24:33.236 geninfo: WARNING: invalid characters removed from testname! 00:24:59.766 19:55:41 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:03.945 19:55:45 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:06.510 19:55:48 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:09.790 19:55:50 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:12.317 19:55:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:15.600 19:55:56 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:25:18.131 19:55:59 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:18.131 19:55:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.131 19:55:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:18.131 19:55:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.131 19:55:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.131 19:55:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.131 19:55:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.131 19:55:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.131 19:55:59 -- paths/export.sh@5 -- $ export PATH 00:25:18.131 19:55:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.131 19:55:59 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:25:18.131 19:55:59 -- common/autobuild_common.sh@435 -- $ date +%s 00:25:18.131 19:55:59 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713981359.XXXXXX 00:25:18.131 19:55:59 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713981359.9FLsLh 00:25:18.131 19:55:59 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:25:18.131 19:55:59 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:25:18.131 19:55:59 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:25:18.131 19:55:59 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:25:18.131 19:55:59 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:25:18.131 19:55:59 -- common/autobuild_common.sh@451 -- $ get_config_params 00:25:18.131 19:55:59 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:25:18.131 19:55:59 -- common/autotest_common.sh@10 -- $ set +x 00:25:18.132 19:55:59 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:25:18.132 19:55:59 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:25:18.132 19:55:59 -- pm/common@17 -- $ local monitor 00:25:18.132 19:55:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:18.132 19:55:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1816721 00:25:18.132 19:55:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:18.132 19:55:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1816724 00:25:18.132 19:55:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:18.132 19:55:59 -- pm/common@21 -- $ date +%s 00:25:18.132 19:55:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1816726 00:25:18.132 19:55:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:18.132 19:55:59 -- pm/common@21 -- $ date +%s 00:25:18.132 19:55:59 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1816729 00:25:18.132 19:55:59 -- pm/common@21 -- $ date +%s 00:25:18.132 19:55:59 -- pm/common@26 -- $ sleep 1 00:25:18.132 19:55:59 -- pm/common@21 -- $ date +%s 00:25:18.132 19:55:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713981359 00:25:18.132 19:55:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713981359 00:25:18.132 19:55:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713981359 00:25:18.132 19:55:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713981359 00:25:18.132 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713981359_collect-vmstat.pm.log 00:25:18.132 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713981359_collect-bmc-pm.bmc.pm.log 00:25:18.132 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713981359_collect-cpu-load.pm.log 00:25:18.132 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713981359_collect-cpu-temp.pm.log 00:25:19.068 
19:56:00 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:25:19.068 19:56:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:25:19.068 19:56:00 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:25:19.068 19:56:00 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:19.068 19:56:00 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:19.068 19:56:00 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:19.068 19:56:00 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:19.068 19:56:00 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:19.068 19:56:00 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:25:19.068 19:56:00 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:19.068 19:56:00 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:19.068 19:56:00 -- pm/common@30 -- $ signal_monitor_resources TERM 00:25:19.068 19:56:00 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:25:19.068 19:56:00 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:19.068 19:56:00 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:25:19.068 19:56:00 -- pm/common@45 -- $ pid=1816743 00:25:19.068 19:56:00 -- pm/common@52 -- $ sudo kill -TERM 1816743 00:25:19.068 19:56:00 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:19.068 19:56:00 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:25:19.068 19:56:00 -- pm/common@45 -- $ pid=1816745 00:25:19.068 19:56:00 -- pm/common@52 -- $ sudo kill -TERM 1816745 00:25:19.068 19:56:00 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:19.068 19:56:00 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:25:19.068 19:56:00 -- pm/common@45 -- $ pid=1816746 00:25:19.068 19:56:00 -- pm/common@52 -- $ sudo kill -TERM 1816746 00:25:19.068 19:56:00 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:19.068 19:56:00 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:25:19.068 19:56:00 -- pm/common@45 -- $ pid=1816744 00:25:19.068 19:56:00 -- pm/common@52 -- $ sudo kill -TERM 1816744 00:25:19.068 + [[ -n 1493592 ]] 00:25:19.068 + sudo kill 1493592 00:25:19.078 [Pipeline] } 00:25:19.097 [Pipeline] // stage 00:25:19.102 [Pipeline] } 00:25:19.121 [Pipeline] // timeout 00:25:19.126 [Pipeline] } 00:25:19.143 [Pipeline] // catchError 00:25:19.149 [Pipeline] } 00:25:19.166 [Pipeline] // wrap 00:25:19.172 [Pipeline] } 00:25:19.187 [Pipeline] // catchError 00:25:19.194 [Pipeline] stage 00:25:19.196 [Pipeline] { (Epilogue) 00:25:19.211 [Pipeline] catchError 00:25:19.212 [Pipeline] { 00:25:19.228 [Pipeline] echo 00:25:19.229 Cleanup processes 00:25:19.235 [Pipeline] sh 00:25:19.520 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:25:19.520 1816881 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:25:19.520 1817008 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:25:19.535 [Pipeline] sh 00:25:19.820 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:25:19.820 ++ grep -v 'sudo pgrep'
00:25:19.820 ++ awk '{print $1}'
00:25:19.820 + sudo kill -9 1816881
00:25:19.832 [Pipeline] sh
00:25:20.116 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:25:28.242 [Pipeline] sh
00:25:28.523 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:25:28.523 Artifacts sizes are good
00:25:28.539 [Pipeline] archiveArtifacts
00:25:28.547 Archiving artifacts
00:25:28.803 [Pipeline] sh
00:25:29.099 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:25:29.115 [Pipeline] cleanWs
00:25:29.125 [WS-CLEANUP] Deleting project workspace...
00:25:29.125 [WS-CLEANUP] Deferred wipeout is used...
00:25:29.132 [WS-CLEANUP] done
00:25:29.133 [Pipeline] }
00:25:29.151 [Pipeline] // catchError
00:25:29.162 [Pipeline] sh
00:25:29.440 + logger -p user.info -t JENKINS-CI
00:25:29.448 [Pipeline] }
00:25:29.463 [Pipeline] // stage
00:25:29.468 [Pipeline] }
00:25:29.486 [Pipeline] // node
00:25:29.491 [Pipeline] End of Pipeline
00:25:29.522 Finished: SUCCESS
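For reference: the coverage post-processing that ran just before this epilogue (autotest.sh@391-397) reduces to one capture, one merge, and a series of filters. A condensed sketch, with the long --rc option list abbreviated into $RC and $SPDK_DIR standing for the checkout path used above (both are placeholders, not names from the run):
  RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'  # genhtml/geninfo rc flags from the run omitted for brevity
  lcov $RC --no-external -q -c -d "$SPDK_DIR" -t spdk-gp-11 -o cov_test.info  # capture counters from this test run
  lcov $RC -q -a cov_base.info -a cov_test.info -o cov_total.info             # merge with the pre-test baseline
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $RC -q -r cov_total.info "$pat" -o cov_total.info                    # strip vendored and tool sources
  done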